Tracking of the VApp application (game UI)

2025-05-11 18:04:12 +02:00
commit 89e9db9b62
17763 changed files with 3718499 additions and 0 deletions

VApp/node_modules/@videojs/http-streaming/CHANGELOG.md generated vendored Normal file

File diff suppressed because it is too large

VApp/node_modules/@videojs/http-streaming/CONTRIBUTING.md generated vendored Normal file

@ -0,0 +1,30 @@
# CONTRIBUTING
We welcome contributions from everyone!
## Getting Started
Make sure you have Node.js 8 or higher and npm installed.
1. Fork this repository and clone your fork
1. Install dependencies: `npm install`
1. Run a development server: `npm start`
### Making Changes
Refer to the [video.js plugin conventions][conventions] for more detail on best practices and tooling for video.js plugin authorship.
When you've made your changes, push your commit(s) to your fork and issue a pull request against the original repository.
### Running Tests
Testing is a crucial part of any software project. For all but the most trivial changes (typos, etc.), test cases are expected. Tests are run in actual browsers using [Karma][karma].
- In all available and supported browsers: `npm test`
- In a specific browser: `npm run test:chrome`, `npm run test:firefox`, etc.
- While the development server is running (`npm start`), navigate to [`http://localhost:9999/test/`][local]
[karma]: http://karma-runner.github.io/
[local]: http://localhost:9999/test/
[conventions]: https://github.com/videojs/generator-videojs-plugin/blob/main/docs/conventions.md

VApp/node_modules/@videojs/http-streaming/LICENSE generated vendored Normal file

@ -0,0 +1,49 @@
Copyright Brightcove, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
The AES decryption implementation in this project is derived from the
Stanford Javascript Cryptography Library
(http://bitwiseshiftleft.github.io/sjcl/). That work is covered by the
following copyright and permission notice:
Copyright 2009-2010 Emily Stark, Mike Hamburg, Dan Boneh.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
THIS SOFTWARE IS PROVIDED BY THE AUTHORS ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
The views and conclusions contained in the software and documentation
are those of the authors and should not be interpreted as representing
official policies, either expressed or implied, of the authors.

VApp/node_modules/@videojs/http-streaming/README.md generated vendored Normal file

Several file diffs suppressed because they are too large or too long

VApp/node_modules/@videojs/http-streaming/docs/intro.md generated vendored Normal file

@ -0,0 +1,62 @@
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
- [Overview](#overview)
- [HTTP Live Streaming](#http-live-streaming)
- [Dynamic Adaptive Streaming over HTTP](#dynamic-adaptive-streaming-over-http)
- [Further Documentation](#further-documentation)
- [Helpful Tools](#helpful-tools)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
# Overview
This project supports both [HLS][hls] and [MPEG-DASH][dash] playback in the video.js player. This document is intended as a primer for anyone interested in contributing or just better understanding how bits from a server get turned into video on their display.
## HTTP Live Streaming
[HLS][apple-hls-intro] has two primary characteristics that distinguish it from other video formats:
- Delivered over HTTP(S): it uses the standard application protocol of the web to deliver all its data
- Segmented: longer videos are broken up into smaller chunks which can be downloaded independently and switched between at runtime
A standard HLS stream consists of a *Main Playlist* which references one or more *Media Playlists*. Each Media Playlist contains one or more sequential video segments. All these components form a logical hierarchy that informs the player of the different quality levels of the video available and how to address the individual segments of video at each of those levels:
![HLS Format](images/hls-format.png)
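As a concrete illustration, here is what that hierarchy looks like in hypothetical playlist files (URLs and values made up). The Main Playlist references the Media Playlists:
```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=640x360
low/media.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=4500000,RESOLUTION=1920x1080
high/media.m3u8
```
And each Media Playlist references its segments:
```
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
segment0.ts
#EXTINF:10.0,
segment1.ts
#EXT-X-ENDLIST
```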
HLS streams can be delivered in two different modes: a "static" mode for videos that can be played back from any point, often referred to as video-on-demand (VOD); or a "live" mode where later portions of the video become available as time goes by. In the static mode, the Main and Media playlists are fixed. The player is guaranteed that the set of video segments referenced by those playlists will not change over time.
Live mode can work in one of two ways. For truly live events, the most common configuration is for each individual Media Playlist to only include the latest video segment and a small number of consecutive previous segments. In this mode, the player may be able to seek backwards a short time in the video but probably not all the way back to the beginning. In the other live configuration, new video segments can be appended to the Media Playlists but older segments are never removed. This configuration allows the player to seek back to the beginning of the stream at any time during the broadcast and transitions seamlessly to the static stream type when the event finishes.
If you're interested in a more in-depth treatment of the HLS format, check out [Apple's documentation][apple-hls-intro] and the IETF [Draft Specification][hls-spec].
## Dynamic Adaptive Streaming over HTTP
Similar to HLS, [DASH][dash-wiki] content is segmented and is delivered over HTTP(S).
A DASH stream consists of a *Media Presentation Description* (MPD) that describes segment metadata such as timing information, URLs, resolution and bitrate. Each segment can contain either ISO base media file format (e.g., MP4) or MPEG-2 TS data. Typically, the MPD will describe the various *Representations* that map to collections of segments at different bitrates to allow bitrate selection. These Representations can be organized as a SegmentList, SegmentTemplate, SegmentBase, or SegmentTimeline.
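As a hypothetical illustration, a minimal static MPD organized with a SegmentTemplate might look like this (element values made up):
```xml
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static"
     mediaPresentationDuration="PT60S">
  <Period>
    <AdaptationSet mimeType="video/mp4" segmentAlignment="true">
      <!-- $RepresentationID$ and $Number$ are expanded per segment request -->
      <SegmentTemplate initialization="$RepresentationID$/init.mp4"
                       media="$RepresentationID$/segment-$Number$.mp4"
                       duration="4" startNumber="1"/>
      <Representation id="360p" bandwidth="1400000" width="640" height="360"/>
      <Representation id="1080p" bandwidth="4500000" width="1920" height="1080"/>
    </AdaptationSet>
  </Period>
</MPD>
```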
DASH streams can be delivered in both video-on-demand (VOD) and live streaming modes. In the VOD case, the MPD describes all the segments and representations available and the player can choose which representation to play based on its capabilities.
Live mode is accomplished using the ISOBMFF Live profile if the segments are in ISOBMFF. There are a few different ways to set up the MPD, including but not limited to updating the MPD after an interval of time, using *Periods*, or using the *availabilityTimeOffset* field. A few examples of this are provided by the [DASH Reference Client][dash-if-reference-client]. The MPD will provide enough information for the player to play back the live stream and seek back as far as is specified in the MPD.
If you're interested in a more in-depth description of MPEG-DASH, check out [MDN's tutorial on setting up DASH][mdn-dash-tut] or the [DASHIF Guidelines][dash-if-guide].
# Further Documentation
- [Architecture](arch.md)
- [Glossary](glossary.md)
- [Adaptive Bitrate Switching](bitrate-switching.md)
- [Multiple Alternative Audio Tracks](multiple-alternative-audio-tracks.md)
- [reloadSourceOnError](reload-source-on-error.md)
- [A Walk Through VHS](a-walk-through-vhs.md)
# Helpful Tools
- [FFmpeg](http://trac.ffmpeg.org/wiki/CompilationGuide)
- [Thumbcoil](http://thumb.co.il/): web based video inspector
[hls]: /docs/intro.md#http-live-streaming
[dash]: /docs/intro.md#dynamic-adaptive-streaming-over-http
[apple-hls-intro]: https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/Introduction/Introduction.html
[hls-spec]: https://datatracker.ietf.org/doc/draft-pantos-http-live-streaming/
[dash-wiki]: https://en.wikipedia.org/wiki/Dynamic_Adaptive_Streaming_over_HTTP
[dash-if-reference-client]: https://reference.dashif.org/dash.js/
[mdn-dash-tut]: https://developer.mozilla.org/en-US/Apps/Fundamentals/Audio_and_video_delivery/Setting_up_adaptive_streaming_media_sources
[dash-if-guide]: http://dashif.org/guidelines/

VApp/node_modules/@videojs/http-streaming/docs/a-walk-through-vhs.md generated vendored Normal file

@ -0,0 +1,248 @@
# A Walk Through VHS
Today we're going to take a walk through VHS. We'll start from a manifest URL and end with video playback.
The purpose of this walk is not to see every piece of code, or define every module. Instead it's about seeing the most important parts of VHS. The goal is to make VHS more approachable.
Let's start with a video tag:
```html
<video>
<source src="http://example.com/manifest.m3u8" type="application/x-mpegURL">
</video>
```
The source, `manifest.m3u8`, is an HLS manifest. You can tell from the `.m3u8` extension and the `type`.
Safari (and a few other browsers) will play that video natively, because Safari supports HLS content. However, other browsers don't support native playback of HLS and will fail to play the video.
VHS provides the ability to play HLS (and DASH) content in browsers that don't support native HLS (and DASH) playback.
Since VHS is a part of Video.js, let's set up a Video.js player for the `<video>`:
```html
<link href="//vjs.zencdn.net/7.10.2/video-js.min.css" rel="stylesheet">
<script src="//vjs.zencdn.net/7.10.2/video.min.js"></script>
<video-js id="myPlayer" class="video-js" data-setup='{}'>
<source src="http://example.com/manifest.m3u8" type="application/x-mpegURL">
</video-js>
```
Video.js does a lot of things, but in the context of VHS, the important feature is a way to let VHS handle playback of the source. To do this, VHS is registered as a Video.js Source Handler. When a Video.js player is created and provided a `<source>`, Video.js goes through its list of registered Source Handlers, including VHS, to see if they're able to play that source.
In this case, because it's an HLS source, VHS will tell Video.js "I can handle that!" From there, VHS is given the URL and it begins its process.
## videojs-http-streaming.js
`VhsSourceHandler` is defined at the [bottom of src/videojs-http-streaming.js](https://github.com/videojs/http-streaming/blob/0964cb4827d9e80aa36f2fa29e35dad92ca84111/src/videojs-http-streaming.js#L1226-L1233).
The function which Video.js calls to see if the `VhsSourceHandler` can handle the source is aptly named `canHandleSource`.
Inside `canHandleSource`, VHS checks the source's `type`. In our case, it sees `application/x-mpegURL`, and, if we're running in a browser with MSE, then it says "I can handle it!" (It actually says "maybe," because in life there are few guarantees, and because the spec says to use "maybe.")
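A simplified sketch of the shape of that check (the real implementation in src/videojs-http-streaming.js handles more cases, such as overrides and DASH types):
```javascript
// Simplified sketch of a Video.js source handler's canHandleSource.
const simplifiedVhsSourceHandler = {
  canHandleSource(srcObj) {
    // HLS MIME types, e.g. application/x-mpegURL or application/vnd.apple.mpegurl
    const isHls = (/^application\/(x-mpegurl|vnd\.apple\.mpegurl)$/i).test(srcObj.type || '');

    // Only offer to handle the source when Media Source Extensions exist.
    if (isHls && window.MediaSource) {
      // "maybe", never "probably": the spec says so, and in life there
      // are few guarantees.
      return 'maybe';
    }
    return '';
  }
};
```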
### VhsSourceHandler
Since VHS told Video.js that it can handle the source, Video.js passes the source to `VhsSourceHandler`'s `handleSource` function. That's where VHS really gets going. It creates a new `VhsHandler` object, merges some options, and performs initial setup. For instance, it creates listeners on some `tech` events.
```mermaid
flowchart TD
VhsSourceHandler --> VhsHandler
```
> :information_source: **What should be put in VhsHandler?**
>
> videojs-http-streaming.js is a good place for interfacing with Video.js and other plugins, isolating integrations from the rest of the code.
>
> Here are a couple of examples of what's done within videojs-http-streaming.js:
> * most EME handling for DRM and setup of the [videojs-contrib-eme plugin](https://github.com/videojs/videojs-contrib-eme)
> * mapping/handling of options passed down via Video.js
## PlaylistController
One critical object that `VhsHandler`'s constructor creates is a new `PlaylistController`.
```mermaid
flowchart TD
VhsSourceHandler --> VhsHandler
VhsHandler --> PlaylistController
```
`PlaylistController` is not a great name, and has grown in size to be a bit unwieldy, but it's the hub of VHS. Eventually, it should be broken into smaller pieces, but for now, it handles the creation and management of most of the other VHS modules. Its code can be found in [src/playlist-controller.js](/src/playlist-controller.js).
`PlaylistController` is a lot to say. So we often refer to it as PC.
If you need to find a place where different modules communicate, you will probably end up in PC. Just about all of `VhsHandler` that doesn't interface with Video.js or other plugins interfaces with PC.
PC's [constructor](/src/playlist-controller.js#L148) does a lot. Instead of listing all of the things it does, let's go step-by-step through the main ones, passing the source we had above.
```html
<video-js id="myPlayer" class="video-js" data-setup='{}'>
<source src="http://example.com/manifest.m3u8" type="application/x-mpegURL">
</video-js>
```
Looking at the `<source>` tag, `VhsSourceHandler` already used the "type" to tell Video.js that it could handle the source. `VhsHandler` took the manifest URL, in this case "manifest.m3u8" and provided it to the constructor of PC.
The first thing that PC must do is download that source, but it doesn't make the request itself. Instead, it creates [this.mainPlaylistLoader_](/src/playlist-controller.js#L263-L265).
```mermaid
flowchart TD
VhsSourceHandler --> VhsHandler
VhsHandler --> PlaylistController
PlaylistController --> PlaylistLoader
```
`mainPlaylistLoader_` is an instance of either the [HLS PlaylistLoader](/src/playlist-loader.js#L379) or the [DashPlaylistLoader](/src/dash-playlist-loader.js#L259).
The names betray their use. They load the playlist. The URL ("manifest.m3u8" here) is given, and the manifest/playlist is downloaded and parsed. If the content is live, the playlist loader also handles refreshing the manifest. For HLS, where manifests point to other manifests, the playlist loader requests those as well.
As for parsing, for HLS, the manifest responses are parsed using [m3u8-parser](https://github.com/videojs/m3u8-parser). For DASH, the manifest response is parsed using [mpd-parser](https://github.com/videojs/mpd-parser). The output of these parsers is a JSON object that VHS understands. The main structure can be seen in the READMEs, e.g., [here](https://github.com/videojs/m3u8-parser#parsed-output).
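For example, feeding a manifest string through m3u8-parser (following its README) yields the kind of object VHS works with:
```javascript
import { Parser } from 'm3u8-parser';

// manifestText would be the body of the manifest.m3u8 response.
const manifestText = [
  '#EXTM3U',
  '#EXT-X-TARGETDURATION:10',
  '#EXTINF:10,',
  'segment0.ts',
  '#EXT-X-ENDLIST'
].join('\n');

const parser = new Parser();

parser.push(manifestText);
parser.end();

// parser.manifest is a JSON object like the one shown below.
console.log(parser.manifest);
```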
So what was once a URL in a `<source>` tag was requested and parsed into a JSON object like the following:
```
Manifest {
playlists: [
{
attributes: {},
Manifest
}
],
mediaGroups: { ... },
segments: [ ... ],
...
}
```
Many properties are removed for simplicity. This is a top level manifest (often referred to as a main manifest or a multivariant manifest [from the HLS spec]), and within it there are playlists, each playlist being a Manifest itself. Since the JSON "schema" for main and media playlists is the same, you will see irrelevant properties within any given manifest object. For instance, you might see a `targetDuration` property on the main manifest object, though a main manifest doesn't have a target duration. You can ignore irrelevant properties. Eventually they should be cleaned up, and a proper schema defined for manifest objects.
PC will also use `mainPlaylistLoader_` to select which media playlist is active (e.g., the 720p rendition or the 480p rendition), so that `mainPlaylistLoader_` will only need to refresh that individual playlist if the stream is live.
> :information_source: **Future Work**
>
> The playlist loaders are not the clearest modules. Work has been started on improvements to the loaders and how we use them: https://github.com/videojs/http-streaming/pull/1208
>
> That work makes them much easier to read, but will require changes throughout the rest of the code before the old PlaylistLoader and DashPlaylistLoader code can be removed.
### Media Source Extensions
The next thing PC needs to do is set up a media source for [Media Source Extensions](https://www.w3.org/TR/media-source/). Specifically, it needs to create [this.mediaSource](/src/playlist-controller.js#L208) and its associated [source buffers](/src/playlist-controller.js#L1814). These are where audio and video data will be appended, so that the browser has content to play. But those aren't used directly. Because source buffers can only handle one operation at a time, [this.sourceUpdater_](/src/playlist-controller.js#L232) is created. `sourceUpdater_` is a queue for operations performed on the source buffers. That's pretty much it. So all of the MSE pieces for appending get wrapped up in `sourceUpdater_`.
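To see why the queue matters: a SourceBuffer can only run one operation at a time, and calling `appendBuffer()` while `sourceBuffer.updating` is `true` throws. A minimal sketch of the queueing idea (not the actual SourceUpdater code):
```javascript
// Minimal sketch of serializing SourceBuffer operations.
class SimpleSourceUpdater {
  constructor(sourceBuffer) {
    this.sourceBuffer = sourceBuffer;
    this.queue = [];
    // When the current operation finishes, start the next one.
    sourceBuffer.addEventListener('updateend', () => this.shiftQueue_());
  }

  appendBuffer(bytes) {
    this.queue.push(() => this.sourceBuffer.appendBuffer(bytes));
    this.shiftQueue_();
  }

  shiftQueue_() {
    // Only start an operation if the buffer is idle.
    if (!this.sourceBuffer.updating && this.queue.length) {
      this.queue.shift()();
    }
  }
}
```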
## Segment Loaders
The SourceUpdater created for MSE above is passed to the segment loaders.
[this.mainSegmentLoader_](/src/playlist-controller.js#L270-L274) is used for muxed content (audio and video in one segment) and for audio or video only streams.
[this.audioSegmentLoader_](/src/playlist-controller.js#L277-L280) is used when the content is demuxed (audio and video in separate playlists).
```mermaid
flowchart TD
VhsSourceHandler --> VhsHandler
VhsHandler --> PlaylistController
PlaylistController --> PlaylistLoader
PlaylistController --> SourceUpdater
PlaylistController --> SegmentLoader
```
Besides options and the `sourceUpdater_` from PC, the segment loaders are given a [playlist](https://github.com/videojs/http-streaming/blob/0964cb4827d9e80aa36f2fa29e35dad92ca84111/src/segment-loader.js#L988). This playlist is a media playlist from the `mainPlaylistLoader_`. So looking back at our parsed manifest object:
```
Manifest {
playlists: [
{
attributes: {},
Manifest
}
],
mediaGroups: { ... },
segments: [ ... ],
...
}
```
The media playlists were those objects found in the `playlists` array. Each segment loader is given one of those.
Segment Loader uses the provided media playlist to determine which segment to download next. It performs this check when [monitorBuffer_](https://github.com/videojs/http-streaming/blob/main/src/segment-loader.js#L1300) is called, which ultimately runs [chooseNextRequest_](https://github.com/videojs/http-streaming/blob/0964cb4827d9e80aa36f2fa29e35dad92ca84111/src/segment-loader.js#L1399). `chooseNextRequest_` looks at the buffer, the current time, and a few other properties to choose what segment to download from the `playlist`'s `segments` array.
### Choosing Segments to Download
VHS uses a strategy called `mediaIndex++` for choosing the next segment, see [here](https://github.com/videojs/http-streaming/blob/0964cb4827d9e80aa36f2fa29e35dad92ca84111/src/segment-loader.js#L1442). This means that, if segment 3 was previously requested, segment 4 should be requested next, and segment 5 after that. Those segment numbers are determined by the HLS [#EXT-X-MEDIA-SEQUENCE tag](https://datatracker.ietf.org/doc/html/draft-pantos-hls-rfc8216bis-10#section-4.4.3.2).
If there are no seeks or rendition changes, `chooseNextRequest_` will rely on the `mediaIndex++` strategy.
If there are seeks or rendition changes, then `chooseNextRequest_` will look at segment timing values via the `SyncController` (created previously in PC), the current time, and the buffer, to determine what the next segment should be, and what its start time should be (to position it on the timeline).
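A rough sketch of that decision (illustrative only; the real `chooseNextRequest_` weighs sync points, buffered ranges, and more, and `loader.mediaIndex`/`loader.currentTime_` here are illustrative properties):
```javascript
// Illustrative happy path of next-segment selection.
function chooseNextIndex(loader, playlist) {
  if (loader.mediaIndex !== null) {
    // No seek or rendition change: mediaIndex++.
    return loader.mediaIndex + 1;
  }

  // After a seek or rendition change, fall back to timing info: find
  // the segment whose time range contains the current time.
  let end = 0;
  return playlist.segments.findIndex((segment) => {
    end += segment.duration;
    return loader.currentTime_ < end;
  });
}
```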
```mermaid
flowchart TD
VhsSourceHandler --> VhsHandler
VhsHandler --> PlaylistController
PlaylistController --> PlaylistLoader
PlaylistController --> SourceUpdater
PlaylistController --> SegmentLoader
SegmentLoader --> SyncController
```
The `SyncController` has various [strategies](https://github.com/videojs/http-streaming/blob/0964cb4827d9e80aa36f2fa29e35dad92ca84111/src/sync-controller.js#L16) for ensuring that different renditions, which can have different media sequence and segment timing values, can be positioned on the playback timeline successfully. (It is also [used by PC](/src/playlist-controller.js#L1472) to establish a `seekable` range.)
### Downloading and Appending Segments
If the buffer is not full, and a segment was chosen, then `SegmentLoader` will download and append it. It does this via a [mediaSegmentRequest](https://github.com/videojs/http-streaming/blob/0964cb4827d9e80aa36f2fa29e35dad92ca84111/src/segment-loader.js#L2489).
```mermaid
flowchart TD
VhsSourceHandler --> VhsHandler
VhsHandler --> PlaylistController
PlaylistController --> PlaylistLoader
PlaylistController --> SourceUpdater
PlaylistController --> SegmentLoader
SegmentLoader --> SyncController
SegmentLoader --> mediaSegmentRequest
```
`mediaSegmentRequest` takes a lot of arguments. Most are callbacks. These callbacks provide the data that `SegmentLoader` needs to append the segment: the segment's timing information, captions, and the segment data itself.
When the `SegmentLoader` receives timing info events, it can update the source buffer's timestamp offset (via `SourceUpdater`).
When the `SegmentLoader` receives segment data events, it can append the data to the source buffer (via `SourceUpdater`).
```mermaid
flowchart TD
VhsSourceHandler --> VhsHandler
VhsHandler --> PlaylistController
PlaylistController --> PlaylistLoader
PlaylistController --> SourceUpdater
PlaylistController --> SegmentLoader
SegmentLoader --> SyncController
SegmentLoader --> mediaSegmentRequest
SegmentLoader --> SourceUpdater
```
## mediaSegmentRequest
We talked a bit about how `SegmentLoader` uses `mediaSegmentRequest`, but what does [mediaSegmentRequest](https://github.com/videojs/http-streaming/blob/0964cb4827d9e80aa36f2fa29e35dad92ca84111/src/media-segment-request.js#L941) do?
Besides downloading segments, `mediaSegmentRequest` [decrypts](https://github.com/videojs/http-streaming/blob/0964cb4827d9e80aa36f2fa29e35dad92ca84111/src/media-segment-request.js#L621) AES encrypted segments, probes [MP4](https://github.com/videojs/http-streaming/blob/0964cb4827d9e80aa36f2fa29e35dad92ca84111/src/media-segment-request.js#L171) and [TS](https://github.com/videojs/http-streaming/blob/0964cb4827d9e80aa36f2fa29e35dad92ca84111/src/media-segment-request.js#L375) segments for timing info, and [transmuxes](https://github.com/videojs/http-streaming/blob/0964cb4827d9e80aa36f2fa29e35dad92ca84111/src/media-segment-request.js#L280) TS segments into MP4s using [mux.js](https://github.com/videojs/mux.js) so they can be appended to the source buffers.
```mermaid
flowchart TD
VhsSourceHandler --> VhsHandler
VhsHandler --> PlaylistController
PlaylistController --> PlaylistLoader
PlaylistController --> SourceUpdater
PlaylistController --> SegmentLoader
SegmentLoader --> SyncController
SegmentLoader --> mediaSegmentRequest
SegmentLoader --> SourceUpdater
mediaSegmentRequest --> mux.js
```
## Video playback begins
The video can start playing as soon as there's enough audio and video (for muxed streams) in the buffer to move the playhead forwards. So playback may begin before the `SegmentLoader` completes its full cycle.
But once `SegmentLoader` does finish, it starts the process again, looking for new content.
There are other modules, and other functions of the code (e.g., exclusion logic, ABR, etc.), but this is the most critical path of VHS, the one that allows video to play in the browser.

VApp/node_modules/@videojs/http-streaming/docs/arch.md generated vendored Normal file

@ -0,0 +1,28 @@
## HLS Project Overview
This project has three primary duties:
1. Download and parse playlist files
1. Implement the [HTMLVideoElement](https://html.spec.whatwg.org/multipage/embedded-content.html#the-video-element) interface
1. Feed content bits to a SourceBuffer by downloading and transmuxing video segments
### Playlist Management
The [playlist loader](playlist-loader.md) ([source](../src/playlist-loader.js)) handles all of the details of requesting, parsing, updating, and switching playlists at runtime. Its operation is described by this state diagram:
![Playlist Loader States](images/playlist-loader-states.nomnoml.svg)
During VOD playback, the loader will move quickly to the HAVE_METADATA state and then stay there unless a quality switch request sends it to SWITCHING_MEDIA while it fetches an alternate playlist. The loader enters the HAVE_CURRENT_METADATA state when a live stream is detected and it's time to refresh the current media playlist to find out about new video segments.
### HLS Tech
Currently, the HLS project integrates with [video.js](http://www.videojs.com/) as a [tech](https://github.com/videojs/video.js/blob/main/docs/guides/tech.md). That means it's responsible for providing an interface that closely mirrors the `<video>` element. You can see that implementation in [videojs-http-streaming.js](../src/videojs-http-streaming.js), the primary entry point of the project.
### Transmuxing
Most browsers don't have support for the file type that HLS video segments are stored in. To get HLS playing back on those browsers, contrib-hls strings together a number of technologies:
1. The [Netstream](http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/NetStream.html) in [video.js SWF](https://github.com/videojs/video-js-swf) has a special mode of operation that allows binary video data packaged as an [FLV](http://en.wikipedia.org/wiki/Flash_Video) to be provided directly
1. [videojs-contrib-media-sources](https://github.com/videojs/videojs-contrib-media-sources) provides an abstraction layer over the SWF that operates like a [Media Source](https://w3c.github.io/media-source/#mediasource)
1. A pure javascript transmuxer that repackages HLS segments as FLVs
Transmuxing is the process of transforming media stored in one container format into another container without modifying the underlying media data. If that last sentence doesn't make any sense to you, check out the [Introduction to Media](media.md) for more details.
### Buffer Management
Buffering in contrib-hls is driven by two functions in videojs-hls.js: fillBuffer() and drainBuffer(). During its operation, contrib-hls periodically calls fillBuffer() which determines when more video data is required and begins a segment download if so. Meanwhile, drainBuffer() is invoked periodically during playback to process incoming segments and append them onto the [SourceBuffer](http://w3c.github.io/media-source/#sourcebuffer). In conjunction with a goal buffer length, this producer-consumer relationship drives the buffering behavior of contrib-hls.
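Schematically, that producer-consumer loop looks something like the sketch below (`bufferedAhead()`, `downloadNextSegment()`, and the other helpers are hypothetical stand-ins, not the contrib-hls source):
```javascript
// Illustrative producer-consumer buffering loop.
const GOAL_BUFFER_LENGTH = 30; // seconds to keep buffered ahead

function fillBuffer() {
  // Producer: start a segment download when the buffer runs low.
  if (bufferedAhead() < GOAL_BUFFER_LENGTH && !downloadInProgress()) {
    downloadNextSegment();
  }
}

function drainBuffer() {
  // Consumer: append any finished download to the SourceBuffer.
  const segment = nextDownloadedSegment();
  if (segment && !sourceBuffer.updating) {
    sourceBuffer.appendBuffer(segment.bytes);
  }
}

// Both are polled periodically during playback.
setInterval(() => { fillBuffer(); drainBuffer(); }, 500);
```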

VApp/node_modules/@videojs/http-streaming/docs/bitrate-switching.md generated vendored Normal file

@ -0,0 +1,44 @@
# Adaptive Switching Behavior
The HLS tech tries to ensure the highest-quality viewing experience
possible, given the available bandwidth and encodings. This doesn't
always mean using the highest-bitrate rendition available-- if the player
is 300px by 150px, it would be a big waste of bandwidth to download a 4k
stream. By default, the player attempts to load the highest-bitrate
variant that is less than the most recently detected segment bandwidth,
with one condition: if there are multiple variants with dimensions greater
than the current player size, it will only switch up one size greater
than the current player size.
If you're the visual type, the whole process is illustrated
below. Whenever a new segment is downloaded, we calculate the download
bitrate based on the size of the segment and the time it took to
download:
![New bitrate info is available](images/bitrate-switching-1.png)
First, we filter out all the renditions that have a higher bitrate
than the new measurement:
![Bitrate filtering](images/bitrate-switching-2.png)
Then we get rid of any renditions that are bigger than the current
player dimensions:
![Resolution filtering](images/bitrate-switching-3.png)
We don't want a significant quality drop just because your player is
one pixel too small, so we add back in the next highest
resolution. The highest bitrate rendition that remains is the one that
gets used:
![Final selection](images/bitrate-switching-4.png)
If it turns out no rendition is acceptable based on the filtering
described above, the first encoding listed in the main playlist will
be used.
If you'd like your player to use a different set of priorities, it's
possible to completely replace the rendition selection logic. For
instance, you could always choose the most appropriate rendition by
resolution, even though this might mean more stalls during playback.
See the documentation on `player.vhs.selectPlaylist` for more details.
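For example, a hypothetical replacement that always chooses the lowest-bandwidth rendition might look like this (the `player.vhs` access path and playlist property names follow the parsed-manifest format in the README, and should be checked against the `selectPlaylist` docs for your version):
```javascript
// Hypothetical override: always pick the lowest-bandwidth rendition.
player.vhs.selectPlaylist = function() {
  const playlists = this.playlists.main.playlists;

  // reduce assumes at least one playlist exists.
  return playlists.reduce((lowest, playlist) => {
    const a = playlist.attributes.BANDWIDTH || Number.MAX_VALUE;
    const b = lowest.attributes.BANDWIDTH || Number.MAX_VALUE;
    return a < b ? playlist : lowest;
  });
};
```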

VApp/node_modules/@videojs/http-streaming/docs/content-steering.md generated vendored Normal file

@ -0,0 +1,47 @@
# Content Steering
Content Steering provides content creators a method of runtime control over
the location from which segments are fetched via a content steering server and
pathways defined in the content manifest. For a working example visit
https://www.content-steering.com/.
HLS and DASH each define their own specific Content Steering tags and properties
that prescribe how the client should fetch the content steering manifest as well
as make steering decisions. `#EXT-X-CONTENT-STEERING` and `<ContentSteering>` respectively.
For reference, HLS spec section 4.4.6.6:
https://datatracker.ietf.org/doc/html/draft-pantos-hls-rfc8216bis#section-4.4.6.6
DASH-IF:
https://dashif.org/docs/DASH-IF-CTS-00XX-Content-Steering-Community-Review.pdf
Both protocols rely on a content steering server to provide steering guidance.
VHS will request the content steering manifest from the location defined in the
content steering tag in the `.m3u8` or `.mpd` and refresh the steering manifest
at an interval defined in that manifest.
A content steering manifest response will look something like this:
```
{
"VERSION": 1,
"TTL": 300,
"RELOAD-URI": "https://steeringservice.com/app/instance12345?session=abc",
"CDN-PRIORITY": ["beta","alpha"]
}
```
`CDN-PRIORITY` represents either `PATHWAY-PRIORITY` for HLS or `SERVICE-LOCATION-PRIORITY` for DASH. This list of keys in priority order will match with either a `PATHWAY-ID` or `serviceLocation` (HLS and DASH respectively) associated with a location where VHS can fetch segments.
VHS will attempt to fetch segments from the locations defined in the steering manifest response, in that priority order. Then, during playback, VHS will provide quality of experience metrics back to the steering server, which can adjust the steering guidance accordingly.
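Conceptually, applying that guidance is a priority scan over the known locations (a sketch; `pathways` as a Map of id to location and `isExcluded` are hypothetical stand-ins for VHS internals):
```javascript
// Sketch: pick the first pathway in the steering priority list that
// maps to a known, non-excluded location.
function choosePathway(priorityList, pathways, isExcluded) {
  for (const id of priorityList) {
    if (pathways.has(id) && !isExcluded(id)) {
      return pathways.get(id);
    }
  }
  // No usable pathway: fall back to the manifest's default location.
  return pathways.get('default');
}

// e.g. choosePathway(['beta', 'alpha'], pathways, isExcluded);
```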
## Notable Support
### HLS
* Pathway Cloning
### DASH
* queryBeforeStart
* proxyServerURL
## Currently Missing Support
### DASH
* Extended HTTP GET request parametrization, see: ISO/IEC 23009-1 [2], clause I.3

VApp/node_modules/@videojs/http-streaming/docs/creating-content.md generated vendored Normal file

@ -0,0 +1,264 @@
# Creating Content
## Commands for creating test streams
### Streams with EXT-X-PROGRAM-DATE-TIME for testing seekToProgramTime and convertToProgramTime
lavfi and testsrc are provided for creating a test stream in ffmpeg:
* `-g 300` sets the GOP size to 300 (keyframe interval; at 30fps, one keyframe every 10 seconds)
* `-f hls` sets the format to HLS (creates an m3u8 and TS segments)
* `-hls_time 10` sets the goal segment size to 10 seconds
* `-hls_list_size 20` sets the number of segments in the m3u8 file to 20
* `-hls_flags program_date_time` an HLS flag for setting #EXT-X-PROGRAM-DATE-TIME on each segment
```
ffmpeg \
-f lavfi \
-i testsrc=duration=200:size=1280x720:rate=30 \
-g 300 \
-f hls \
-hls_time 10 \
-hls_list_size 20 \
-hls_flags program_date_time \
stream.m3u8
```
## Commands used for segments in `test/segments` dir
### video.ts
Copy only the first two video frames, leave out audio.
```
$ ffmpeg -i index0.ts -vframes 2 -an -vcodec copy video.ts
```
### videoOneSecond.ts
Blank video for 1 second, MMS-Small resolution, start at 0 PTS/DTS, 2 frames per second
```
$ ffmpeg -f lavfi -i color=c=black:s=128x96:r=2:d=1 -muxdelay 0 -c:v libx264 videoOneSecond.ts
```
### videoOneSecond1.ts through videoOneSecond4.ts
Same as videoOneSecond.ts, but follows timing in sequence, with videoOneSecond.ts acting as the 0 index. Each segment starts at the second that its index indicates (e.g., videoOneSecond2.ts has a start time of 2 seconds).
```
$ ffmpeg -i videoOneSecond.ts -muxdelay 0 -output_ts_offset 1 -vcodec copy videoOneSecond1.ts
$ ffmpeg -i videoOneSecond.ts -muxdelay 0 -output_ts_offset 2 -vcodec copy videoOneSecond2.ts
$ ffmpeg -i videoOneSecond.ts -muxdelay 0 -output_ts_offset 3 -vcodec copy videoOneSecond3.ts
$ ffmpeg -i videoOneSecond.ts -muxdelay 0 -output_ts_offset 4 -vcodec copy videoOneSecond4.ts
```
### audio.ts
Copy only the first two audio frames, leave out video.
```
$ ffmpeg -i index0.ts -aframes 2 -vn -acodec copy audio.ts
```
### videoMinOffset.ts
video.ts but with an offset of 0
```
$ ffmpeg -i video.ts -muxpreload 0 -muxdelay 0 -vcodec copy videoMinOffset.ts
```
### audioMinOffset.ts
audio.ts but with an offset of 0. Note that muxed.ts is used because ffmpeg didn't like
the use of audio.ts
```
$ ffmpeg -i muxed.ts -muxpreload 0 -muxdelay 0 -acodec copy -vn audioMinOffset.ts
```
### videoMaxOffset.ts
This segment offsets content such that it ends at exactly the max timestamp before a rollover occurs. It uses the max timestamp of 2^33 (8589934592) minus the segment duration of 6006 (0.066733 seconds) in order to not rollover mid segment, and divides the value by 90,000 to convert it from media time to seconds.
(2^33 - 6006) / 90,000 = 95443.6509556
```
$ ffmpeg -i videoMinOffset.ts -muxdelay 95443.6509556 -muxpreload 95443.6509556 -output_ts_offset 95443.6509556 -vcodec copy videoMaxOffset.ts
```
### audioMaxOffset.ts
This segment offsets content such that it ends at exactly the max timestamp before a rollover occurs. It uses the max timestamp of 2^33 (8589934592) minus the segment duration of 11520 (0.128000 seconds) in order to not rollover mid segment, and divides the value by 90,000 to convert it from media time to seconds.
(2^33 - 11520) / 90,000 = 95443.5896889
```
$ ffmpeg -i audioMinOffset.ts -muxdelay 95443.5896889 -muxpreload 95443.5896889 -output_ts_offset 95443.5896889 -acodec copy audioMaxOffset.ts
```
### videoLargeOffset.ts
This segment offsets content by the rollover threshold of 2^32 (4294967296) found in the rollover handling of mux.js, adds 1 to ensure there aren't any cases where there's an equal match, then divides the value by 90,000 to convert it from media time to seconds.
(2^32 + 1) / 90,000 = 47721.8588556
```
$ ffmpeg -i videoMinOffset.ts -muxdelay 47721.8588556 -muxpreload 47721.8588556 -output_ts_offset 47721.8588556 -vcodec copy videoLargeOffset.ts
```
### audioLargeOffset.ts
This segment offsets content by the rollover threshold of 2^32 (4294967296) found in the rollover handling of mux.js, adds 1 to ensure there aren't any cases where there's an equal match, then divides the value by 90,000 to convert it from media time to seconds.
(2^32 + 1) / 90,000 = 47721.8588556
```
$ ffmpeg -i audioMinOffset.ts -muxdelay 47721.8588556 -muxpreload 47721.8588556 -output_ts_offset 47721.8588556 -acodec copy audioLargeOffset.ts
```
### videoLargeOffset2.ts
This takes videoLargeOffset.ts and adds the duration of videoLargeOffset.ts (6006 / 90,000 = 0.066733 seconds) to its offset so that this segment can act as the second in one continuous stream.
47721.8588556 + 0.066733 = 47721.9255886
```
$ ffmpeg -i videoLargeOffset.ts -muxdelay 47721.9255886 -muxpreload 47721.9255886 -output_ts_offset 47721.9255886 -vcodec copy videoLargeOffset2.ts
```
### audioLargeOffset2.ts
This takes audioLargeOffset.ts and adds the duration of audioLargeOffset.ts (11520 / 90,000 = 0.128 seconds) to its offset so that this segment can act as the second in one continuous stream.
47721.8588556 + 0.128 = 47721.9868556
```
$ ffmpeg -i audioLargeOffset.ts -muxdelay 47721.9868556 -muxpreload 47721.9868556 -output_ts_offset 47721.9868556 -acodec copy audioLargeOffset2.ts
```
### caption.ts
Copy the first two frames of video out of a ts segment that already includes CEA-608 captions.
`ffmpeg -i index0.ts -vframes 2 -an -vcodec copy caption.ts`
### id3.ts
Copy only the first five frames of video, leave out audio.
`ffmpeg -i index0.ts -vframes 5 -an -vcodec copy smaller.ts`
Create an ID3 tag using [id3taggenerator][apple_streaming_tools]:
`id3taggenerator -text "{\"id\":1, \"data\": \"id3\"}" -o tag.id3`
Create a file `macro.txt` with the following:
`0 id3 tag.id3`
Run [mediafilesegmenter][apple_streaming_tools] with the small video segment and macro file, to produce a new segment with ID3 tags inserted at the specified times.
`mediafilesegmenter -start-segments-with-iframe --target-duration=1 --meta-macro-file=macro.txt -s -A smaller.ts`
### mp4Video.mp4
Copy only the first two video frames, leave out audio.
movflags:
* frag\_keyframe: "Start a new fragment at each video keyframe."
* empty\_moov: "Write an initial moov atom directly at the start of the file, without describing any samples in it."
* omit\_tfhd\_offset: "Do not write any absolute base\_data\_offset in tfhd atoms. This avoids tying fragments to absolute byte positions in the file/streams." (see also: https://www.w3.org/TR/mse-byte-stream-format-isobmff/#movie-fragment-relative-addressing)
```
$ ffmpeg -i file.mp4 -movflags frag_keyframe+empty_moov+omit_tfhd_offset -vframes 2 -an -vcodec copy mp4Video.mp4
```
### mp4Audio.mp4
Copy only the first two audio frames, leave out video.
movflags:
* frag\_keyframe: "Start a new fragment at each video keyframe."
* empty\_moov: "Write an initial moov atom directly at the start of the file, without describing any samples in it."
* omit\_tfhd\_offset: "Do not write any absolute base\_data\_offset in tfhd atoms. This avoids tying fragments to absolute byte positions in the file/streams." (see also: https://www.w3.org/TR/mse-byte-stream-format-isobmff/#movie-fragment-relative-addressing)
```
$ ffmpeg -i file.mp4 -movflags frag_keyframe+empty_moov+omit_tfhd_offset -aframes 2 -vn -acodec copy mp4Audio.mp4
```
### mp4VideoInit.mp4 and mp4AudioInit.mp4
Using DASH as the format type (-f) will lead to two init segments, one for video and one for audio. Using HLS will lead to a single joined init segment.
Renamed from .m4s to .mp4
```
$ ffmpeg -i input.mp4 -f dash out.mpd
```
### webmVideoInit.webm and webmVideo.webm
```
$ cat mp4VideoInit.mp4 mp4Video.mp4 > video.mp4
$ ffmpeg -i video.mp4 -dash_segment_type webm -c:v libvpx-vp9 -f dash output.mpd
$ mv init-stream0.webm webmVideoInit.webm
$ mv chunk-stream0-00001.webm webmVideo.webm
```
### subtitlesEncrypted.vtt
Run subtitles.vtt through subtle crypto. As an example:
```javascript
const fs = require('fs');
const { subtle } = require('crypto').webcrypto;
// first segment has media index 0, so should have the following IV
const DEFAULT_IV = new Uint8Array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]);
const getCryptoKey = async (bytes, iv = DEFAULT_IV) => {
const algorithm = { name: 'AES-CBC', iv };
const extractable = true;
const usages = ['encrypt', 'decrypt'];
return subtle.importKey('raw', bytes, algorithm, extractable, usages);
};
const run = async () => {
const keyFilePath = process.argv[2];
const segmentFilePath = process.argv[3];
const keyBytes = fs.readFileSync(keyFilePath);
const segmentBytes = fs.readFileSync(segmentFilePath);
const key = await getCryptoKey(keyBytes);
const encryptedBytes = await subtle.encrypt({
name: 'AES-CBC',
iv: DEFAULT_IV,
}, key, segmentBytes);
// subtle.encrypt resolves with an ArrayBuffer
fs.writeFileSync('./encrypted.vtt', Buffer.from(encryptedBytes));
console.log(`Wrote ${encryptedBytes.byteLength} bytes to encrypted.vtt:`);
};
run();
```
To use the script:
```
$ node index.js encryptionKey.key subtitles.vtt
```
## Other useful commands
### Joined (audio and video) initialization segment (for HLS)
Using DASH as the format type (-f) will lead to two init segments, one for video and one for audio. Using HLS will lead to a single joined init segment.
Note that -hls\_fmp4\_init\_filename defaults to init.mp4, but is here for readability.
Without specifying fmp4 for hls\_segment\_type, ffmpeg defaults to ts.
```
$ ffmpeg -i input.mp4 -f hls -hls_fmp4_init_filename init.mp4 -hls_segment_type fmp4 out.m3u8
```
[apple_streaming_tools]: https://developer.apple.com/documentation/http_live_streaming/about_apple_s_http_live_streaming_tools

VApp/node_modules/@videojs/http-streaming/docs/dash-playlist-loader.md generated vendored Normal file

@ -0,0 +1,87 @@
# DASH Playlist Loader
## Purpose
The [DashPlaylistLoader][dpl] (DPL) is responsible for requesting MPDs, parsing them, and keeping track of the media "playlists" associated with the MPD. The [DPL] is used with a [SegmentLoader][sl] to load fmp4 fragments from a DASH source.
## Basic Responsibilities
1. To request an MPD.
2. To parse an MPD into a format [videojs-http-streaming][vhs] can understand.
3. To refresh MPDs according to their minimumUpdatePeriod.
4. To allow selection of a specific media stream.
5. To sync the client clock with a server clock according to the UTCTiming node.
6. To refresh a live MPD for changes.
## Design
The [DPL] is written to be as similar as possible to the [PlaylistLoader][pl]. This means that the majority of the public API for these two classes is the same, and so are the states they go through and the events they trigger.
### States
![DashPlaylistLoader States](images/dash-playlist-loader-states.nomnoml.svg)
- `HAVE_NOTHING` the state before the MPD is received and parsed.
- `HAVE_MAIN_MANIFEST` the state before a media stream is setup but the MPD has been parsed.
- `HAVE_METADATA` the state after a media stream is setup.
### API
- `load()` this will either start or kick the loader during playback.
- `start()` this will start the [DPL] and request the MPD.
- `parseMainXml()` this will parse the MPD manifest and return the result.
- `media()` this will return the currently active media stream or set a new active media stream.
### Events
- `loadedplaylist` signals the setup of a main playlist, representing the DASH source as a whole, from the MPD; or a media playlist, representing a media stream.
- `loadedmetadata` signals initial setup of a media stream.
- `minimumUpdatePeriod` signals that an update period has ended and the MPD must be requested again.
- `playlistunchanged` signals that no changes have been made to an MPD.
- `mediaupdatetimeout` signals that a live MPD and media stream must be refreshed.
- `mediachanging` signals that the currently active media stream is going to be changed.
- `mediachange` signals that the new media stream has been updated.
### Interaction with Other Modules
![DPL with PC and MG](images/dash-playlist-loader-pc-mg-sequence.puml.png)
### Special Features
There are a few features of [DPL] that are different from [PL] due to fundamental differences between HLS and DASH standards.
#### MinimumUpdatePeriod
This is a time period specified in the MPD after which the MPD should be re-requested and parsed. There could be any number of changes to the MPD between these update periods.
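A minimal sketch of that refresh loop (assuming a hypothetical `requestAndParseMpd()` helper and a period expressed in seconds):
```javascript
// Illustrative minimumUpdatePeriod (MUP) refresh loop.
async function refreshLoop(url) {
  const mpd = await requestAndParseMpd(url); // hypothetical helper

  if (mpd.minimumUpdatePeriod >= 0) {
    // A dynamic MPD may change in any way between refreshes, so the
    // whole manifest is re-requested and re-parsed after each period.
    setTimeout(() => refreshLoop(url), mpd.minimumUpdatePeriod * 1000);
  }
}
```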
#### SyncClientServerClock
There is a UTCTiming node in the MPD that allows the client clock to be synced with a clock on the server. This may affect the results of parsing the MPD.
#### Requesting `sidx` Boxes
To be filled out.
### Previous Behavior
Until version 1.9.0 of [VHS], we thought that [DPL] could skip the `HAVE_NOTHING` and `HAVE_MAIN_MANIFEST` states, as no other XHR requests are needed once the MPD has been downloaded and parsed. However, this is incorrect as there are some Presentations that signal the use of a "Segment Index box" or `sidx`. This `sidx` references specific byte ranges in a file that could contain media or potentially other `sidx` boxes.
A DASH MPD that describes a `sidx` is therefore similar to an HLS main manifest, in that the MPD contains references to something that must be requested and parsed first before references to media segments can be obtained. With this in mind, it was necessary to update the initialization and state transitions of [DPL] to allow further XHR requests to be made after the initial request for the MPD.
### Current Behavior
In [this PR](https://github.com/videojs/http-streaming/pull/386), the [DPL] was updated to go through the `HAVE_NOTHING` and `HAVE_MAIN_MANIFEST` states before arriving at `HAVE_METADATA`. If the MPD does not contain `sidx` boxes, then this transition happens quickly after `load()` is called, spending little time in the `HAVE_MAIN_MANIFEST` state.
The initial media selection for `mainPlaylistLoader` is made in the `loadedplaylist` handler located in [PlaylistController][pc]. We now use `hasPendingRequest` to determine whether to automatically select a media playlist for the `mainPlaylistLoader` as a fallback in case one is not selected by [PC]. The child [DPL]s are created with a media playlist passed in as an argument, so this fallback is not necessary for them. Instead, that media playlist is saved and auto-selected once we enter the `HAVE_MAIN_MANIFEST` state.
The `updateMain` method will return `null` if no updates are found.
The `selectinitialmedia` event is not triggered until an audioPlaylistLoader (which for DASH is always a child [DPL]) has a media playlist. This is signaled by triggering `loadedmetadata` on the respective [DPL]. This event is used to initialize the [Representations API][representations] and setup EME (see [contrib-eme]).
[dpl]: ../src/dash-playlist-loader.js
[sl]: ../src/segment-loader.js
[vhs]: intro.md
[pl]: ../src/playlist-loader.js
[pc]: ../src/playlist-controller.js
[representations]: ../README.md#hlsrepresentations
[contrib-eme]: https://github.com/videojs/videojs-contrib-eme

VApp/node_modules/@videojs/http-streaming/docs/glossary.md generated vendored Normal file

@ -0,0 +1,23 @@
# Glossary
**Playlist**: This is a representation of an HLS or DASH manifest.
**Media Playlist**: This is a manifest that represents a single rendition or media stream of the source.
**Playlist Controller**: This acts as the main controller for the playback engine. It interacts with the SegmentLoaders, PlaylistLoaders, PlaybackWatcher, etc.
**Playlist Loader**: This will request the source and load the main manifest. It is also instructed by the ABR algorithm to load a media playlist, or wraps a media playlist if one is provided as the source. There are more details about the playlist loader [here](./arch.md).
**DASH Playlist Loader**: This will do as the PlaylistLoader does, but for DASH sources. It also handles DASH-specific functionality, such as refreshing the MPD according to the minimumUpdatePeriod and synchronizing to a server clock.
**Segment Loader**: This determines which segment should be loaded, requests it via the Media Segment Loader and passes the result to the Source Updater.
**Media Segment Loader**: This requests a given segment, decrypts the segment if necessary, and returns it to the Segment Loader.
**Source Updater**: This manages the browser's [SourceBuffers](https://developer.mozilla.org/en-US/docs/Web/API/SourceBuffer). It appends decrypted segment bytes provided by the Segment Loader to the corresponding Source Buffer.
**ABR(Adaptive Bitrate) Algorithm**: This concept is described more in detail [here](https://en.wikipedia.org/wiki/Adaptive_bitrate_streaming). Our chosen ABR algorithm is referenced by [selectPlaylist](../README.md#hlsselectplaylist) and is described more [here](./bitrate-switching.md).
**Playback Watcher**: This attempts to resolve common playback stalls caused by improper seeking, gaps in content, and browser issues.
**Sync Controller**: This will attempt to create a mapping between the segment index and a display time on the player.

VApp/node_modules/@videojs/http-streaming/docs/hlse.md generated vendored Normal file

@ -0,0 +1,20 @@
# Encrypted HTTP Live Streaming
The [HLS spec](http://tools.ietf.org/html/draft-pantos-http-live-streaming-13#section-6.2.3) requires segments to be encrypted with AES-128 in CBC mode with PKCS7 padding. You can encrypt data to that specification with a combination of [OpenSSL](https://www.openssl.org/) and the [pkcs7 utility](https://github.com/brightcove/pkcs7). From the command-line:
```sh
# encrypt the text "hello" into a file
# since this is for testing, skip the key salting so the output is stable
# using -nosalt outside of testing is a terrible idea!
echo -n "hello" | pkcs7 | \
openssl enc -aes-128-cbc -nopad -nosalt -K $KEY -iv $IV > hello.encrypted
# xxd is a handy way of translating binary into a format easily consumed by
# javascript
xxd -i hello.encrypted
```
Later, you can decrypt it:
```sh
openssl enc -d -nopad -aes-128-cbc -K $KEY -iv $IV
```

Binary image files not shown

VApp/node_modules/@videojs/http-streaming/docs/images/dash-playlist-loader-states.nomnoml.svg generated vendored Normal file

@ -0,0 +1,12 @@
<svg width="280" height="310" version="1.1" baseProfile="full" viewbox="0 0 280 310" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events" style="font-weight:bold; font-size:10pt; font-family:'Arial', Helvetica, sans-serif;;stroke-width:2;stroke-linejoin:round;stroke-linecap:round"><text x="160" y="111" style="font-weight:normal;">load()</text>
<path d="M140 81 L140 105 L140 129 L140 129 " style="stroke:#33322E;fill:none;stroke-dasharray:none;"></path>
<path d="M136.8 121 L140 125 L143.2 121 L140 129 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<text x="160" y="211" style="font-weight:normal;">media()</text>
<path d="M140 181 L140 205 L140 229 L140 229 " style="stroke:#33322E;fill:none;stroke-dasharray:none;"></path>
<path d="M136.8 221 L140 225 L143.2 221 L140 229 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<rect x="63" y="30" height="50" width="154" style="stroke:#33322E;fill:#eee8d5;stroke-dasharray:none;"></rect>
<text x="83" y="60" style="">HAVE_NOTHING</text>
<rect x="30" y="130" height="50" width="220" style="stroke:#33322E;fill:#eee8d5;stroke-dasharray:none;"></rect>
<text x="50" y="160" style="">HAVE_MAIN_MANIFEST</text>
<rect x="56" y="230" height="50" width="168" style="stroke:#33322E;fill:#eee8d5;stroke-dasharray:none;"></rect>
<text x="76" y="260" style="">HAVE_METADATA</text></svg>



@ -0,0 +1,125 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns="http://www.w3.org/2000/svg"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
width="744.09448819"
height="1052.3622047"
id="svg2"
version="1.1"
inkscape:version="0.48.2 r9819"
sodipodi:docname="New document 1">
<defs
id="defs4" />
<sodipodi:namedview
id="base"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
inkscape:zoom="0.74898074"
inkscape:cx="405.31989"
inkscape:cy="721.1724"
inkscape:document-units="px"
inkscape:current-layer="layer1"
showgrid="false"
inkscape:window-width="1165"
inkscape:window-height="652"
inkscape:window-x="0"
inkscape:window-y="0"
inkscape:window-maximized="0" />
<metadata
id="metadata7">
<rdf:RDF>
<cc:Work
rdf:about="">
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
<dc:title></dc:title>
</cc:Work>
</rdf:RDF>
</metadata>
<g
inkscape:label="Layer 1"
inkscape:groupmode="layer"
id="layer1">
<g
id="g3832">
<g
transform="translate(-80,0)"
id="g3796">
<rect
style="fill:none;stroke:#000000;stroke-width:4.99253178;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
id="rect3756"
width="195.00757"
height="75.007133"
x="57.496265"
y="302.08554" />
<text
xml:space="preserve"
style="font-size:39.94025421px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
x="80.563461"
y="353.93951"
id="text3758"
sodipodi:linespacing="125%"
transform="scale(0.99841144,1.0015911)"><tspan
sodipodi:role="line"
id="tspan3760"
x="80.563461"
y="353.93951">Header</tspan></text>
</g>
<g
transform="translate(-80,0)"
id="g3801">
<text
xml:space="preserve"
style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
x="278.44489"
y="354.50266"
id="text3762"
sodipodi:linespacing="125%"><tspan
sodipodi:role="line"
id="tspan3764"
x="278.44489"
y="354.50266">Raw Bitstream Payload (RBSP)</tspan></text>
<rect
style="fill:none;stroke:#000000;stroke-width:5;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
id="rect3768"
width="660.63977"
height="75"
x="252.5"
y="302.09293" />
</g>
</g>
<text
xml:space="preserve"
style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
x="10.078175"
y="432.12851"
id="text3806"
sodipodi:linespacing="125%"><tspan
sodipodi:role="line"
id="tspan3808"
x="10.078175"
y="432.12851">1 byte</tspan></text>
<text
xml:space="preserve"
style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
x="-31.193787"
y="252.32137"
id="text3810"
sodipodi:linespacing="125%"><tspan
sodipodi:role="line"
id="tspan3812"
x="-31.193787"
y="252.32137">H264 Network Abstraction Layer (NAL) Unit</tspan></text>
</g>
</svg>


Binary image files not shown

VApp/node_modules/@videojs/http-streaming/docs/images/playlist-loader-states.nomnoml.svg generated vendored Normal file

@ -0,0 +1,26 @@
<svg width="304" height="610" version="1.1" baseProfile="full" viewbox="0 0 304 610" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events" style="font-weight:bold; font-size:10pt; font-family:'Arial', Helvetica, sans-serif;;stroke-width:2;stroke-linejoin:round;stroke-linecap:round"><text x="172" y="111" style="font-weight:normal;">load()</text>
<path d="M152 81 L152 105 L152 129 L152 129 " style="stroke:#33322E;fill:none;stroke-dasharray:none;"></path>
<path d="M148.8 121 L152 125 L155.2 121 L152 129 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<text x="172" y="211" style="font-weight:normal;">media()</text>
<path d="M152 181 L152 205 L152 229 L152 229 " style="stroke:#33322E;fill:none;stroke-dasharray:none;"></path>
<path d="M148.8 221 L152 225 L155.2 221 L152 229 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<text x="172" y="311" style="font-weight:normal;">media()/ start()</text>
<path d="M152 281 L152 305 L152 329 L152 329 " style="stroke:#33322E;fill:none;stroke-dasharray:none;"></path>
<path d="M148.8 321 L152 325 L155.2 321 L152 329 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<path d="M152 381 L152 405 L152 429 L152 429 " style="stroke:#33322E;fill:none;stroke-dasharray:4 4;"></path>
<path d="M148.8 421 L152 425 L155.2 421 L152 429 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<path d="M155.2 389 L152 385 L148.8 389 L152 381 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<path d="M152 481 L152 505 L152 529 L152 529 " style="stroke:#33322E;fill:none;stroke-dasharray:4 4;"></path>
<path d="M148.8 521 L152 525 L155.2 521 L152 529 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<path d="M155.2 489 L152 485 L148.8 489 L152 481 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<rect x="75" y="30" height="50" width="154" style="stroke:#33322E;fill:#eee8d5;stroke-dasharray:none;"></rect>
<text x="95" y="60" style="">HAVE_NOTHING</text>
<rect x="42" y="130" height="50" width="220" style="stroke:#33322E;fill:#eee8d5;stroke-dasharray:none;"></rect>
<text x="62" y="160" style="">HAVE_MAIN_MANIFEST</text>
<rect x="56" y="230" height="50" width="192" style="stroke:#33322E;fill:#eee8d5;stroke-dasharray:none;"></rect>
<text x="76.3" y="260" style="">SWITCHING_MEDIA</text>
<rect x="68" y="330" height="50" width="168" style="stroke:#33322E;fill:#eee8d5;stroke-dasharray:none;"></rect>
<text x="88" y="360" style="">HAVE_METADATA</text>
<text x="67" y="460" style="font-weight:normal;font-style:italic;">mediaupdatetimeout</text>
<rect x="30" y="530" height="50" width="244" style="stroke:#33322E;fill:#eee8d5;stroke-dasharray:none;"></rect>
<text x="50" y="560" style="">HAVE_CURRENT_METADATA</text></svg>

After

Width:  |  Height:  |  Size: 2.8 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 58 KiB

Binary file not shown.

View File

@ -0,0 +1,119 @@
@startuml
header DashPlaylistLoader sequences
title DashPlaylistLoader sequences: Main Manifest with Alternate Audio
Participant "PlaylistController" as PC #red
Participant "MainDashPlaylistLoader" as MPL #blue
Participant "mainSegmentLoader" as SL #blue
Participant "AudioDashPlaylistLoader" as APL #green
Participant "audioSegmentLoader" as ASL #green
Participant "external server" as ext #brown
Participant "mpdParser" as parser #orange
Participant "mediaGroups" as MG #purple
Participant Tech #lightblue
== Initialization ==
PC -> MPL : construct MainPlaylistLoader
PC -> MPL: load()
== Requesting Main Manifest ==
MPL -> MPL : start()
MPL -> ext: xhr request for main manifest
ext -> MPL : response with main manifest
MPL -> parser: parse manifest
parser -> MPL: object representing manifest
note over MPL #lightblue: trigger 'loadedplaylist'
== Requesting Video Manifest ==
note over MPL #lightblue: handling loadedplaylist
MPL -> MPL: media(x)
alt if no sidx
note over MPL #lightgray: zero delay to fake network request
else if sidx
break
MPL -> ext: request sidx
end
end
note over MPL #lightblue: trigger 'loadedmetadata' on main loader [T1]
note over MPL #lightblue: handling 'loadedmetadata'
opt vod and preload !== 'none'
MPL -> SL: playlist()
MPL -> SL: load()
end
== Initializing Media Groups, Choosing Active Tracks ==
MPL -> MG: setupMediaGroups()
MG -> MG: initialize()
== Initializing Alternate Audio Loader ==
MG -> APL: create child playlist loader for alt audio
MG -> MG: activeGroup and audio variant selected
MG -> MG: enable activeTrack, onTrackChanged()
MG -> ASL: reset audio segment loader
== Requesting Alternate Audio Manifest ==
MG -> MG: startLoaders()
MG -> APL: load()
APL -> APL: start()
APL -> APL: zero delay to fake network request
break finish pending tasks
MG -> Tech: add audioTrack
MPL -> PC: setupSourceBuffers_()
MPL -> PC: setupFirstPlay()
loop mainSegmentLoader.monitorBufferTick_()
SL -> ext: requests media segments
ext -> SL: response with media segment bytes
end
end
APL -> APL: zero delay over
APL -> APL: media(x)
alt if no sidx
note over APL #lightgray: zero delay to fake network request
else if sidx
break
MPL -> ext: request sidx
end
end
== Requesting Alternate Audio Segments ==
note over APL #lightblue: trigger 'loadedplaylist'
note over APL #lightblue: handling 'loadedplaylist'
APL -> ASL: playlist()
note over ASL #lightblue: trigger 'loadedmetadata' [T2]
note over APL #lightblue: handling 'loadedmetadata'
APL -> ASL: playlist()
APL -> ASL: load()
loop audioSegmentLoader.monitorBufferTick_()
ASL -> ext: requests media segments
ext -> ASL: response with media segment bytes
end
@enduml

View File

@ -0,0 +1,21 @@
#title: DASH Playlist Loader States
#arrowSize: 0.5
#bendSize: 1
#direction: down
#gutter: 10
#edgeMargin: 1
#edges: rounded
#fillArrows: false
#font: Arial
#fontSize: 10
#leading: 1
#lineWidth: 2
#padding: 20
#spacing: 50
#stroke: #33322E
#zoom: 1
#.label: align=center visual=none italic
[HAVE_NOTHING] load()-> [HAVE_MAIN_MANIFEST]
[HAVE_MAIN_MANIFEST] media()-> [HAVE_METADATA]

Binary file not shown.

View File

@ -0,0 +1,114 @@
@startuml
header PlaylistLoader sequences
title PlaylistLoader sequences: Main Manifest and Alternate Audio
Participant "PlaylistController" as PC #red
Participant "MainPlaylistLoader" as MPL #blue
Participant "mainSegmentLoader" as SL #blue
Participant "AudioPlaylistLoader" as APL #green
Participant "audioSegmentLoader" as ASL #green
Participant "external server" as ext #brown
Participant "m3u8Parser" as parser #orange
Participant "mediaGroups" as MG #purple
Participant Tech #lightblue
== Initialization ==
PC -> MPL : construct MainPlaylistLoader
PC -> MPL: load()
MPL -> MPL : start()
== Requesting Main Manifest ==
MPL -> ext: xhr request for main manifest
ext -> MPL : response with main manifest
MPL -> parser: parse main manifest
parser -> MPL: object representing manifest
note over MPL #lightblue: trigger 'loadedplaylist'
== Requesting Video Manifest ==
note over MPL #lightblue: handling loadedplaylist
MPL -> MPL : media()
MPL -> ext : request child manifest
ext -> MPL: child manifest returned
MPL -> parser: parse child manifest
parser -> MPL: object representing the child manifest
note over MPL #lightblue: trigger 'loadedplaylist'
note over MPL #lightblue: handling 'loadedplaylist'
MPL -> SL: playlist()
MPL -> SL: load()
== Requesting Video Segments ==
note over MPL #lightblue: trigger 'loadedmetadata'
note over MPL #lightblue: handling 'loadedmetadata'
opt vod and preload !== 'none'
MPL -> SL: playlist()
MPL -> SL: load()
end
MPL -> MG: setupMediaGroups()
== Initializing Media Groups, Choosing Active Tracks ==
MG -> APL: create child playlist loader for alt audio
MG -> MG: activeGroup and audio variant selected
MG -> MG: enable activeTrack, onTrackChanged()
MG -> SL: reset mainSegmentLoader
== Requesting Alternate Audio Manifest ==
MG -> MG: startLoaders()
MG -> APL: load()
APL -> APL: start()
APL -> ext: request alt audio media manifest
break finish pending tasks
MG -> Tech: add audioTracks
MPL -> PC: setupSourceBuffers()
MPL -> PC: setupFirstPlay()
loop on monitorBufferTick
SL -> ext: requests media segments
ext -> SL: response with media segment bytes
end
end
ext -> APL: responds with child manifest
APL -> parser: parse child manifest
parser -> APL: object representing child manifest returned
== Requesting Alternate Audio Segments ==
note over APL #lightblue: trigger 'loadedplaylist'
note over APL #lightblue: handling 'loadedplaylist'
APL -> ASL: playlist()
note over APL #lightblue: trigger 'loadedmetadata'
note over APL #lightblue: handling 'loadedmetadata'
APL -> ASL: playlist()
APL -> ASL: load()
loop audioSegmentLoader.load()
ASL -> ext: requests media segments
ext -> ASL: response with media segment bytes
end
@enduml

View File

@ -0,0 +1,25 @@
#title: Playlist Loader States
#arrowSize: 0.5
#bendSize: 1
#direction: down
#gutter: 10
#edgeMargin: 1
#edges: rounded
#fillArrows: false
#font: Arial
#fontSize: 10
#leading: 1
#lineWidth: 2
#padding: 20
#spacing: 50
#stroke: #33322E
#zoom: 1
#.label: align=center visual=none italic
[HAVE_NOTHING] load()-> [HAVE_MAIN_MANIFEST]
[HAVE_MAIN_MANIFEST] media()-> [SWITCHING_MEDIA]
[SWITCHING_MEDIA] media()/ start()-> [HAVE_METADATA]
[HAVE_METADATA] <--> [<label> mediaupdatetimeout]
[<label> mediaupdatetimeout] <--> [HAVE_CURRENT_METADATA]

View File

@ -0,0 +1,13 @@
@startuml
state "Download Segment" as DL
state "Prepare for Append" as PfA
[*] -> DL
DL -> PfA
PfA : transmux (if needed)
PfA -> Append
Append : MSE source buffer
Append -> [*]
@enduml

Binary file not shown.

After

Width:  |  Height:  |  Size: 9.1 KiB

View File

@ -0,0 +1,57 @@
@startuml
participant SegmentLoader order 1
participant "media-segment-request" order 2
participant "videojs-contrib-media-sources" order 3
participant mux.js order 4
participant "Native Source Buffer" order 5
SegmentLoader -> "media-segment-request" : mediaSegmentRequest(...)
group Request
"media-segment-request" -> SegmentLoader : doneFn(...)
note left
At end of all requests
(key/segment/init segment)
end note
SegmentLoader -> SegmentLoader : handleSegment(...)
note left
"Probe" (parse) segment for
timing and track information
end note
SegmentLoader -> "videojs-contrib-media-sources" : append to "fake" source buffer
note left
Source buffer here is a
wrapper around native buffers
end note
group Transmux
"videojs-contrib-media-sources" -> mux.js : postMessage(...setAudioAppendStart...)
note left
Used for checking for overlap when
prefixing audio with silence.
end note
"videojs-contrib-media-sources" -> mux.js : postMessage(...alignGopsWith...)
note left
Used for aligning gops when overlapping
content (switching renditions) to fix
some browser glitching.
end note
"videojs-contrib-media-sources" -> mux.js : postMessage(...push...)
note left
Pushes bytes into the transmuxer pipeline.
end note
"videojs-contrib-media-sources" -> mux.js : postMessage(...flush...)
"mux.js" -> "videojs-contrib-media-sources" : postMessage(...data...)
"videojs-contrib-media-sources" -> "Native Source Buffer" : append
"Native Source Buffer" -> "videojs-contrib-media-sources" : //updateend//
"videojs-contrib-media-sources" -> SegmentLoader : handleUpdateEnd(...)
end
end
SegmentLoader -> SegmentLoader : handleUpdateEnd_()
note left
Saves segment timing info
and starts next request.
end note
@enduml

Binary file not shown.

After

Width:  |  Height:  |  Size: 65 KiB

View File

@ -0,0 +1,29 @@
@startuml
state "Request Segment" as RS
state "Partial Response (1)" as PR1
state "..." as DDD
state "Partial Response (n)" as PRN
state "Prepare for Append (1)" as PfA1
state "Prepare for Append (n)" as PfAN
state "Append (1)" as A1
state "Append (n)" as AN
[*] -> RS
RS --> PR1
PR1 --> DDD
DDD --> PRN
PR1 -> PfA1
PfA1 : transmux (if needed)
PfA1 -> A1
A1 : MSE source buffer
PRN -> PfAN
PfAN : transmux (if needed)
PfAN -> AN
AN : MSE source buffer
AN --> [*]
@enduml

Binary file not shown.

After

Width:  |  Height:  |  Size: 22 KiB

View File

@ -0,0 +1,109 @@
# LHLS
### Table of Contents
* [Background](#background)
* [Current Support for LHLS in VHS](#current-support-for-lhls-in-vhs)
* [Request Segment Pieces](#request-segment-pieces)
* [Transmux and Append Segment Pieces](#transmux-and-append-segment-pieces)
* [videojs-contrib-media-sources background](#videojs-contrib-media-sources-background)
* [Transmux Before Append](#transmux-before-append)
* [Transmux Within media-segment-request](#transmux-within-media-segment-request)
* [mux.js](#muxjs)
* [The New Flow](#the-new-flow)
* [Resources](#resources)
### Background
LHLS stands for Low-Latency HLS (see [Periscope's post](https://medium.com/@periscopecode/introducing-lhls-media-streaming-eb6212948bef)). It's meant for ultra low latency live streaming, where a server can send pieces of a segment before the segment has finished being written, and the player can append those pieces as they arrive, allowing sub-segment-duration latency from true live.
In order to support LHLS, a few components are required:
* A server that supports [chunked transfer encoding](https://en.wikipedia.org/wiki/Chunked_transfer_encoding).
* A client that can:
* request segment pieces
* transmux segment pieces (for browsers that don't natively support the media type)
* append segment pieces
### Current Support for LHLS in VHS
At the moment, VHS doesn't support any of the client requirements: it waits until a request has completed, and the transmuxer expects full segments.
Current flow:
![current flow](./current-flow.plantuml.png)
Expected flow:
![expected flow](./expected-flow.plantuml.png)
### Request Segment Pieces
The first change was to request pieces of a segment. There are a few approaches to accomplish this:
* [Range Requests](https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests)
* requires server support
* more round trips
* [Fetch](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API)
* limited browser support
* doesn't support aborts
* [Plain text MIME type](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Sending_and_Receiving_Binary_Data)
* slightly non-standard
* incurs a cost of converting from string to bytes
*Plain text MIME type* was chosen because of its wide support. It provides a mechanism to access progressive bytes downloaded on [XMLHttpRequest progress events](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequestEventTarget/onprogress).
This change was made in [media-segment-request](/src/media-segment-request.js).
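A rough sketch of the approach, with `segmentUrl` and `handleProgressBytes` as hypothetical placeholders:
```js
// Sketch: surface partial segment bytes from XHR progress events by
// overriding the MIME type so the browser exposes raw bytes as text.
const xhr = new XMLHttpRequest();

xhr.open('GET', segmentUrl); // segmentUrl is a placeholder
// keep each received byte as a single character code
xhr.overrideMimeType('text/plain; charset=x-user-defined');

let bytesParsed = 0;

xhr.addEventListener('progress', () => {
  // responseText grows as chunks of the response arrive
  const newText = xhr.responseText.substring(bytesParsed);
  const bytes = new Uint8Array(newText.length);

  for (let i = 0; i < newText.length; i++) {
    // mask to the low byte to recover the original octet
    bytes[i] = newText.charCodeAt(i) & 0xff;
  }

  bytesParsed = xhr.responseText.length;
  handleProgressBytes(bytes); // hypothetical downstream handler
});

xhr.send();
```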
### Transmux and Append Segment Pieces
Getting the progress bytes is easy. Supporting partial transmuxing and appending is harder.
Current flow:
![current transmux and append flow](./current-transmux-and-append-flow.plantuml.png)
In order to support partial transmuxing and appending in the current flow, videojs-contrib-media-sources would have to get more complicated.
##### videojs-contrib-media-sources background
Browsers, via MSE source buffers, only support a limited set of media types. For most browsers, this means MP4/fragmented MP4. HLS uses TS segments (it also supports fragmented MP4, but that case is less common). This is why transmuxing is necessary.
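For example, browser support for a given container/codec combination can be probed with `MediaSource.isTypeSupported` (the codec strings below are only examples):
```js
// Most browsers can append fragmented MP4 directly...
MediaSource.isTypeSupported('video/mp4; codecs="avc1.4d401f, mp4a.40.2"'); // usually true
// ...but not MPEG-TS, which is why TS segments are transmuxed first.
MediaSource.isTypeSupported('video/mp2t; codecs="avc1.4d401f, mp4a.40.2"'); // false in most browsers
```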
Just like Video.js is a wrapper around the browser video element, bridging compatibility and adding support to extend features, videojs-contrib-media-sources provides support for more media types across different browsers by building in a transmuxer.
Not only did videojs-contrib-media-sources allow us to transmux TS to FMP4, but it also allowed us to transmux TS to FLV for flash support.
Over time, the complexity of logic grew in videojs-contrib-media-sources, and it coupled tightly with videojs-contrib-hls and videojs-http-streaming, firing events to communicate between the two.
Once flash support was moved to a distinct flash module, [via flashls](https://github.com/brightcove/videojs-flashls-source-handler), it was decided to move the videojs-contrib-media-sources logic into VHS, and to remove coupled logic by using only the native source buffers (instead of the wrapper) and transmuxing somewhere within VHS before appending.
##### Transmux Before Append
As the LHLS work started, and videojs-contrib-media-sources needed more logic, the native media source [abstraction leaked](https://en.wikipedia.org/wiki/Leaky_abstraction), adding non-standard functions to work around limitations. In addition, the logic in videojs-contrib-media-sources required more conditional paths, leading to more confusing code.
It was decided that it would be easier to do the transmux before append work in the process of adding support for LHLS. This was widely considered a *good decision*, and provided a means of reducing tech debt while adding in a new feature.
##### Transmux Within media-segment-request
Work started by moving transmuxing into segment-loader; however, we quickly realized that media-segment-request provided a better home.
media-segment-request already handled decrypting segments. If it handled transmuxing as well, then segment-loader could stick with only deciding which segment to request, getting bytes as FMP4, and appending them.
The transmuxing logic moved to a new module called segment-transmuxer, which wrapped around the [WebWorker](https://developer.mozilla.org/en-US/docs/Web/API/Worker/Worker) that wrapped around mux.js (the transmuxer itself).
##### mux.js
While most of the [mux.js pipeline](/docs/diagram.png) supports pushing pieces of data (and should support LHLS by default), its "flushes," which send transmuxed data back to the caller, expected full segments.
Much of the pipeline was reused; however, the top level audio and video segment streams, as well as the entry point, were rewritten so that instead of providing a full segment on flushes, each frame of video is provided individually (audio frames still flush as a group). The new concept of partial flushes was added to the pipeline to handle this case.
##### The New Flow
One benefit to transmuxing before appending is the possibility of extracting track and timing information from the segments. Previously, this required a separate parsing step to happen on the full segment. Now, it is included in the transmuxing pipeline, and comes back to us on separate callbacks.
![new segment loader sequence](./new-segment-loader-sequence.plantuml.png)
### Resources
* https://medium.com/@periscopecode/introducing-lhls-media-streaming-eb6212948bef
* https://github.com/jordicenzano/webserver-chunked-growingfiles

View File

@ -0,0 +1,118 @@
@startuml
participant SegmentLoader order 1
participant "media-segment-request" order 2
participant XMLHttpRequest order 3
participant "segment-transmuxer" order 4
participant mux.js order 5
SegmentLoader -> "media-segment-request" : mediaSegmentRequest(...)
"media-segment-request" -> XMLHttpRequest : request for segment/key/init segment
group Request
XMLHttpRequest -> "media-segment-request" : //segment progress//
note over "media-segment-request" #moccasin
If handling partial data,
tries to transmux new
segment bytes.
end note
"media-segment-request" -> SegmentLoader : progressFn(...)
note left
Forwards "progress" events from
the XML HTTP Request.
end note
group Transmux
"media-segment-request" -> "segment-transmuxer" : transmux(...)
"segment-transmuxer" -> mux.js : postMessage(...setAudioAppendStart...)
note left
Used for checking for overlap when
prefixing audio with silence.
end note
"segment-transmuxer" -> mux.js : postMessage(...alignGopsWith...)
note left
Used for aligning gops when overlapping
content (switching renditions) to fix
some browser glitching.
end note
"segment-transmuxer" -> mux.js : postMessage(...push...)
note left
Pushes bytes into the transmuxer pipeline.
end note
"segment-transmuxer" -> mux.js : postMessage(...partialFlush...)
note left #moccasin
Collates any complete frame data
from partial segment and
caches remainder.
end note
"segment-transmuxer" -> mux.js : postMessage(...flush...)
note left
Collates any complete frame data
from segment, caches only data
required between segments.
end note
"mux.js" -> "segment-transmuxer" : postMessage(...trackinfo...)
"segment-transmuxer" -> "media-segment-request" : onTrackInfo(...)
"media-segment-request" -> SegmentLoader : trackInfoFn(...)
note left
Gets whether the segment
has audio and/or video.
end note
"mux.js" -> "segment-transmuxer" : postMessage(...audioTimingInfo...)
"segment-transmuxer" -> "media-segment-request" : onAudioTimingInfo(...)
"mux.js" -> "segment-transmuxer" : postMessage(...videoTimingInfo...)
"segment-transmuxer" -> "media-segment-request" : onVideoTimingInfo(...)
"media-segment-request" -> SegmentLoader : timingInfoFn(...)
note left
Gets the audio/video
start/end times.
end note
"mux.js" -> "segment-transmuxer" : postMessage(...caption...)
"segment-transmuxer" -> "media-segment-request" : onCaptions(...)
"media-segment-request" -> SegmentLoader : captionsFn(...)
note left
Gets captions from transmux.
end note
"mux.js" -> "segment-transmuxer" : postMessage(...id3Frame...)
"segment-transmuxer" -> "media-segment-request" : onId3(...)
"media-segment-request" -> SegmentLoader : id3Fn(...)
note left
Gets metadata from transmux.
end note
"mux.js" -> "segment-transmuxer" : postMessage(...data...)
"segment-transmuxer" -> "media-segment-request" : onData(...)
"media-segment-request" -> SegmentLoader : dataFn(...)
note left
Gets an fmp4 segment
ready to be appended.
end note
"mux.js" -> "segment-transmuxer" : postMessage(...done...)
note left
Gathers GOP info, and calls
done callback.
end note
"segment-transmuxer" -> "media-segment-request" : onDone(...)
"media-segment-request" -> SegmentLoader : doneFn(...)
note left
Queues callbacks on source
buffer queue to wait for
appends to complete.
end note
end
XMLHttpRequest -> "media-segment-request" : //segment request finished//
end
SegmentLoader -> SegmentLoader : handleAppendsDone_()
note left
Saves segment timing info
and starts next request.
end note
@enduml

Binary file not shown.

After

Width:  |  Height:  |  Size: 132 KiB

View File

@ -0,0 +1,36 @@
# Transmux Before Append Changes
## Overview
In moving our transmuxing stage from after append (to a virtual source buffer from videojs-contrib-media-sources) to before appending (to a native source buffer), some changes were required, and others made the logic simpler. What follows are some details into some of the changes made, why they were made, and what impact they will have.
### Source Buffer Creation
In a pre-TBA (transmux before append) world, videojs-contrib-media-sources' source buffers provided an abstraction around the native source buffers. They also required a bit more information than the native buffers: for instance, when creating the source buffers, they used the full MIME types instead of relying solely on the codec information. This provided the container types, which let the virtual source buffer know whether the media needed to be transmuxed. In a post-TBA world, the container type is no longer required, so only the codec strings are passed along.
As for when the source buffers are created, in the post-TBA world their creation is delayed until we are sure we have all of the information we need. This means that we don't create the native source buffers until the PMT is parsed from the main media. Even if the content is demuxed, we only need to parse the main media, since, for now, we don't rely on codec information from the segment itself, and instead use the manifest-provided codec info or default codecs. While we could create the source buffers earlier if the codec information is provided in the manifest, delaying provides a single, simpler code path, and more opportunity to be flexible about how much codec info the attribute provides. While the HLS specification requires this information, other formats may not, and we have seen content that plays fine but does not adhere to the strict rules of providing all necessary codec information.
### Appending Init Segments
Previously, init segments were handled by videojs-contrib-media-sources for TS segments and segment-loader for FMP4 segments.
videojs-contrib-media-sources and TS:
* video segments
* append the video init segment returned from the transmuxer with every segment
* audio segments
* append the audio init segment returned from the transmuxer only in the following cases:
* first append
* after timestampOffset is set
* audio track events: change/addtrack/removetrack
* 'mediachange' event
segment-loader and FMP4:
* if segment.map is set:
* save (cache) the init segment after the request finished
* append the init segment directly to the source buffer if the segment loader's activeInitSegmentId doesn't match the segment.map generated init segment ID
With the transmux before append and LHLS changes, we only append video init segments on changes as well. This is more important with LHLS, as prepending an init segment before every frame of video would be wasteful.
### Test Changes
Some tests were removed because they were no longer relevant after the change to creating source buffers later. For instance, `waits for both main and audio loaders to finish before calling endOfStream if main loader starting media is unknown` can no longer be tested by waiting for an audio loader response and checking for end of stream, as the test will time out: PlaylistController waits for track info from the main loader before the source buffers are created. That condition is checked elsewhere.

View File

@ -0,0 +1,29 @@
This doc is just a stub right now. Check back later for updates.
# General
When we talk about video, we normally think about it as one monolithic thing. If you ponder it for a moment though, you'll realize it's actually two distinct sorts of information that are presented to the viewer in tandem: a series of pictures and a sequence of audio samples. The temporal nature of audio and video is shared but the techniques used to efficiently transmit them are very different and necessitate a lot of the complexity in video file formats. Bundling up these (at least) two streams into a single package is the first of many issues introduced by the need to serialize video data and is solved by meta-formats called _containers_.
Container formats are probably the most recognizable of the video components because they get the honor of determining the file extension. You've probably heard of MP4, MOV, and WMV, all of which are container formats. Containers specify how to serialize audio, video, and metadata streams into a sequential series of bits, and how to unpack them for decoding. A container is basically a box that can hold video information and timed media data:
![Containers](images/containers.png)
- codecs
- containers, multiplexing
# MPEG2-TS
![MPEG2-TS Structure](images/mp2t-structure.png)
![MPEG2-TS Packet Types](images/mp2t-packet-types.png)
- streaming vs storage
- program table
- program map table
- history, context
# H.264
- NAL units
- Annex B vs MP4 elementary stream
- access unit -> sample
# MP4
- origins: quicktime

View File

@ -0,0 +1,31 @@
# Migration Guide from 2.x to 3.x
## All `hls-` events were removed
All `hls-` prefixed events were removed. If you were listening to any of those events, you should switch the prefix from `hls-` to `vhs-`.
For example, if you were listening to `hls-gap-skip`:
```js
player.tech().on('hls-gap-skip', () => {
console.log('a gap has been skipped');
});
```
you should now listen to `vhs-gap-skip`:
```js
player.tech().on('vhs-gap-skip', () => {
console.log('a gap has been skipped');
});
```
See [VHS Usage Events](../#vhs-usage-events) for more information on these events.
## player properties for accessing VHS
All player level properties to access VHS have been removed.
If you were using any of the following:
* `player.vhs`
* `player.hls`
* `player.dash`
You should switch that to accessing the `vhs` property on the tech like so:
```js
player.tech().vhs
```

75
VApp/node_modules/@videojs/http-streaming/docs/mse.md generated vendored Normal file
View File

@ -0,0 +1,75 @@
# Media Source Extensions Notes
A collection of findings experimenting with Media Source Extensions on
Chrome 36.
* Specifying an audio and video codec when creating a source buffer
but passing in an initialization segment with only a video track
results in a decode error
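
A minimal sketch of that failure case (codec strings and the init segment bytes are placeholders):
```js
// Sketch of the finding above: the source buffer declares audio and
// video codecs, but the appended init segment contains only a video track.
const video = document.createElement('video');
const mediaSource = new MediaSource();

// placeholder: an ISO BMFF init segment whose moov holds a single video trak
const videoOnlyInitSegment = new Uint8Array([/* ftyp + moov bytes */]);

video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', () => {
  const buffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.4d401f, mp4a.40.2"');

  // Chrome 36 rejected this mismatch with a decode error
  buffer.appendBuffer(videoOnlyInitSegment);
});
```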
## ISO Base Media File Format (BMFF)
### Init Segment
A working initialization segment is outlined below. It may be possible
to trim this structure down further.
- `ftyp`
- `moov`
- `mvhd`
- `trak`
- `tkhd`
- `mdia`
- `mdhd`
- `hdlr`
- `minf`
- `mvex`
### Media Segment
The structure of a minimal media segment that actually encapsulates
movie data is outlined below:
- `moof`
- `mfhd`
- `traf`
- `tfhd`
- `tfdt`
- `trun` containing samples
- `mdat`
### Structure
    sample: time {number}, data {array}
    chunk: samples {array}
    track: samples {array}
    segment: moov {box}, mdats {array} | moof {box}, mdats {array}, data {array}

    track
      chunk
        sample

    movie fragment -> track fragment -> [samples]
### Sample Data Offsets
Movie-fragment Relative Addressing: all trun data offsets are relative
to the containing moof (?).
Without default-base-is-moof, the base data offset for each trun in
trafs after the first is the *end* of the previous traf.
#### iso5/DASH Style

    moof
     |- traf (default-base-is-moof)
     |  |- trun_0 <size of moof> + 0
     |  `- trun_1 <size of moof> + 100
     `- traf (default-base-is-moof)
        `- trun_2 <size of moof> + 300
    mdat
     |- samples_for_trun_0 (100 bytes)
     |- samples_for_trun_1 (200 bytes)
     `- samples_for_trun_2
#### Single Track Style

    moof
     `- traf
        `- trun_0 <size of moof> + 0
    mdat
     `- samples_for_trun_0

View File

@ -0,0 +1,95 @@
# Multiple Alternative Audio Tracks
## General
m3u8 manifests with multiple audio streams will have those streams added to `video.js` in an `AudioTrackList`. The `AudioTrackList` can be accessed using `player.audioTracks()` or `tech.audioTracks()`.
## Mapping m3u8 metadata to AudioTracks
The mapping between `AudioTrack` and the parsed m3u8 file is fairly straightforward. The table below shows the mapping:
| m3u8 | AudioTrack |
|---------|------------|
| label | label |
| lang | language |
| default | enabled |
| ??? | kind |
| ??? | id |
As you can see, m3u8s do not have a property for `AudioTrack.id`, which means we let `video.js` randomly generate the `id` for each `AudioTrack`. This has no real impact on the system, as we do not use the `id` anywhere.
The other property that does not have a mapping in the m3u8 is `AudioTrack.kind`. It was decided that we would set the `kind` to `main` when `default` is set to `true`, and otherwise to `alternative`, unless the track has `characteristics` which include `public.accessibility.describes-video`, in which case we set it to `main-desc` (note that this `kind` indicates that the track is a mix of the main track and description, so it can be played *instead* of the main track; a track with kind `description` *only* has the description, not the main track).
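A minimal sketch of that kind-selection rule (the helper name is hypothetical; `properties` stands in for the parsed m3u8 attributes):
```js
// Sketch: derive AudioTrack.kind from a parsed m3u8 media-group entry.
const audioTrackKind = (properties) => {
  let kind = properties.default ? 'main' : 'alternative';

  if (properties.characteristics &&
      properties.characteristics.indexOf('public.accessibility.describes-video') >= 0) {
    // a mix of main + description, playable instead of the main track
    kind = 'main-desc';
  }

  return kind;
};
```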
Below is a basic example of a mapping
m3u8 layout
``` JavaScript
{
'media-group-1': [{
'audio-track-1': {
default: true,
lang: 'eng'
},
'audio-track-2': {
default: false,
lang: 'fr'
},
'audio-track-3': {
default: false,
lang: 'eng',
characteristics: 'public.accessibility.describes-video'
}
}]
}
```
Corresponding AudioTrackList when media-group-1 is used (before any tracks have been changed)
``` JavaScript
[{
label: 'audio-track-1',
enabled: true,
language: 'eng',
kind: 'main',
id: 'random'
}, {
label: 'audio-track-2',
enabled: false,
language: 'fr',
kind: 'alternative',
id: 'random'
}, {
label: 'audio-track-3',
enabled: false,
language: 'eng',
kind: 'main-desc',
id: 'random'
}]
```
## Startup (how tracks are added and used)
> AudioTrack & AudioTrackList live in video.js
1. `HLS` creates a `PlaylistController` and watches for the `loadedmetadata` event
1. `HLS` parses the m3u8 using the `PlaylistController`
1. `PlaylistController` creates a `PlaylistLoader` for the main m3u8
1. `PlaylistController` creates `PlaylistLoader`s for every audio playlist
1. `PlaylistController` creates a `SegmentLoader` for the main m3u8
1. `PlaylistController` creates a `SegmentLoader` for a potential audio playlist
1. `HLS` sees the `loadedmetadata` and finds the currently selected MediaGroup and all the metadata
1. `HLS` removes all `AudioTrack`s from the `AudioTrackList`
1. `HLS` creates `AudioTrack`s for the MediaGroup and adds them to the `AudioTrackList`
1. `HLS` calls `PlaylistController`'s `useAudio` with no arguments (causing it to use the currently enabled audio)
1. `PlaylistController` turns off the current audio `PlaylistLoader` if it is on
1. `PlaylistController` maps the `label` to the `PlaylistLoader` containing the audio
1. `PlaylistController` turns on that `PlaylistLoader` and the corresponding `SegmentLoader` (main or audio only)
1. `MediaSource`/`mux.js` determine how to mux
## How tracks are switched
> AudioTrack & AudioTrackList live in video.js
1. `HLS` is setup to watch for the `changed` event on the `AudioTrackList`
1. User selects a new `AudioTrack` from a menu (where only one track can be enabled)
1. `AudioTrackList` enables the new `AudioTrack` and disables all others
1. `AudioTrackList` triggers a `changed` event
1. `HLS` sees the `changed` event and finds the newly enabled `AudioTrack`
1. `HLS` sends the `label` for the new `AudioTrack` to `PlaylistController`'s `useAudio` function
1. `PlaylistController` turns off the current audio `PlaylistLoader` if it is on
1. `PlaylistController` maps the `label` to the `PlaylistLoader` containing the audio
1. `PlaylistController` turns on that `PlaylistLoader` and the corresponding `SegmentLoader` (main or audio only)
1. `MediaSource`/`mux.js` determine how to mux
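
From application code, the flow above is driven simply by toggling `enabled` on a track (a sketch; the target language is illustrative):
```js
// Sketch: enable the French alternate audio track; the AudioTrackList
// fires 'changed', which kicks off the steps listed above.
const audioTracks = player.audioTracks();

for (let i = 0; i < audioTracks.length; i++) {
  // enabling one track while disabling the rest
  audioTracks[i].enabled = audioTracks[i].language === 'fr';
}
```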

View File

@ -0,0 +1,16 @@
# How to get player time from program time
NOTE: See the doc on [Program Time to Player Time](program-time-to-player-time.md) for definitions and an overview of the conversion process.
## Overview
To convert a program time to a player time, the following steps must be taken:
1. Find the right segment by sequentially searching through the playlist until the program time requested is >= the EXT-X-PROGRAM-DATE-TIME of the segment, and < the EXT-X-PROGRAM-DATE-TIME of the following segment (or the end of the playlist is reached).
2. Determine the segment's start and end player times.
To accomplish #2, the segment must be downloaded and transmuxed (right now only TS segments are handled, and TS is always transmuxed to FMP4). This will obtain start and end times after transmuxer modifications. These are the times that the source buffer will receive and report for the segment's newly created MP4 fragment.
Since there isn't a simple code path for downloading a segment without appending, the easiest approach is to seek to the estimated start time of that segment using the playlist duration calculation function. Because this process is not always accurate (manifest timing values are almost never accurate), a few seeks may be required to accurately seek into that segment.
If all goes well, and the target segment is downloaded and transmuxed, the player time may be found by taking the difference between the requested program time and the EXT-X-PROGRAM-DATE-TIME of the segment, then adding that difference to `segment.videoTimingInfo.transmuxedPresentationStart`.
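Expressed as code, the conversion sketched above looks roughly like the following (a sketch, using the `videoTimingInfo` property names from the [Program Time to Player Time](program-time-to-player-time.md) doc):
```js
// Sketch: convert a program time (a Date) to a player time, given the
// segment located in step 1 and its post-transmux timing info.
const programTimeToPlayerTime = (programTime, segment) => {
  if (!segment.dateTimeObject || !segment.videoTimingInfo) {
    // need both EXT-X-PROGRAM-DATE-TIME and post-transmux timing to anchor
    return null;
  }

  // seconds between the requested program time and the segment's
  // EXT-X-PROGRAM-DATE-TIME
  const offsetFromSegmentStart =
    (programTime.getTime() - segment.dateTimeObject.getTime()) / 1000;

  return segment.videoTimingInfo.transmuxedPresentationStart +
    offsetFromSegmentStart;
};
```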

View File

@ -0,0 +1,47 @@
# Playlist Loader
## Purpose
The [PlaylistLoader][pl] (PL) is responsible for requesting m3u8s, parsing them, and keeping track of the media "playlists" associated with the manifest. The [PL][pl] is used with a [SegmentLoader][sl] to load ts or fmp4 fragments from an HLS source.
## Basic Responsibilities
1. To request an m3u8.
2. To parse a m3u8 into a format [videojs-http-streaming][vhs] can understand.
3. To allow selection of a specific media stream.
4. To refresh a live m3u8 for changes.
## Design
### States
![PlaylistLoader States](images/playlist-loader-states.nomnoml.svg)
- `HAVE_NOTHING` the state before the m3u8 is received and parsed.
- `HAVE_MAIN_MANIFEST` the state before a media manifest is parsed and setup but after the main manifest has been parsed and setup.
- `HAVE_METADATA` the state after a media stream is setup.
- `SWITCHING_MEDIA` the intermediary state we go through while changing to a newly selected media playlist.
- `HAVE_CURRENT_METADATA` a temporary state after requesting a refresh of the live manifest and before receiving the update.
### API
- `load()` this will either start or kick the loader during playback.
- `start()` this will start the [PL] and request the m3u8.
- `media()` this will return the currently active media stream or set a new active media stream.
### Events
- `loadedplaylist` signals the setup of a main playlist, representing the HLS source as a whole, from the m3u8; or a media playlist, representing a media stream.
- `loadedmetadata` signals initial setup of a media stream.
- `playlistunchanged` signals that no changes have been made to a m3u8.
- `mediaupdatetimeout` signals that a live m3u8 and media stream must be refreshed.
- `mediachanging` signals that the currently active media stream is going to be changed.
- `mediachange` signals that the new media stream has been updated.
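
A rough usage sketch (the constructor arguments and `main` property are illustrative, assuming the loader exposes video.js-style event methods):
```js
// Sketch: typical consumption of a PlaylistLoader.
const loader = new PlaylistLoader('https://example.com/main.m3u8', vhs); // args illustrative

loader.on('loadedplaylist', () => {
  // once the main manifest is parsed, select an initial media playlist
  loader.media(loader.main.playlists[0]);
});

loader.on('loadedmetadata', () => {
  // a media playlist is set up; segment loading can begin
});

loader.load();
```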
### Interaction with Other Modules
![PL with PC and MG](images/playlist-loader-pc-mg-sequence.puml.png)
[pl]: ../src/playlist-loader.js
[sl]: ../src/segment-loader.js
[vhs]: intro.md

View File

@ -0,0 +1,141 @@
# How to get program time from player time
## Definitions
NOTE: All times referenced in seconds unless otherwise specified.
*Player Time*: any time that can be gotten/set from player.currentTime() (e.g., any time within player.seekable().start(0) to player.seekable().end(0)).<br />
*Stream Time*: any time within one of the stream's segments. Used by video frames (e.g., dts, pts, base media decode time). While these times natively use clock values, throughout the document the times are referenced in seconds.<br />
*Program Time*: any time referencing the real world (e.g., EXT-X-PROGRAM-DATE-TIME).<br />
*Start of Segment*: the pts (presentation timestamp) value of the first frame in a segment.<br />
## Overview
In order to convert from a *player time* to a *stream time*, an "anchor point" is required to match up a *player time*, *stream time*, and *program time*.
Two anchor points that are usable are the time since the start of a new timeline (e.g., the time since the last discontinuity or start of the stream), and the start of a segment. Because, in our requirements for this conversion, each segment is tagged with its *program time* in the form of an [EXT-X-PROGRAM-DATE-TIME tag](https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.2.6), using the segment start as the anchor point is the easiest solution. It's the closest potential anchor point to the time to convert, and it doesn't require us to track time changes across segments (e.g., trimmed or prepended content).
Those time changes are the result of the transmuxer, which can add/remove content in order to keep the content playable (without gaps or other breaking changes between segments), particularly when a segment doesn't start with a key frame.
In order to make use of the segment start, and to calculate the offset between the segment start and the time to convert, a few properties are needed:
1. The start of the segment before transmuxing
1. Time changes made to the segment during transmuxing
1. The start of the segment after transmuxing
While the start of the segment before and after transmuxing is trivial to retrieve, getting the time changes made during transmuxing is more complicated, as we must account for any trimming, prepending, and gap filling made during the transmux stage. However, the required use-case only needs the position of a video frame, allowing us to ignore any changes made to the audio timeline (because VHS uses video as the timeline of truth), as well as a couple of the video modifications.
What follows are the changes made to a video stream by the transmuxer that could alter the timeline, and if they must be accounted for in the conversion:
* Keyframe Pulling
* Used when: the segment doesn't start with a keyframe.
* Impact: the keyframe with the lowest dts value in the segment is "pulled" back to the first dts value in the segment, and all frames in-between are dropped.
* Need to account in time conversion? No. If a keyframe is pulled, and frames before it are dropped, then the segment will maintain the same segment duration, and the viewer is only seeing the keyframe during that period.
* GOP Fusion
* Used when: the segment doesn't start with a keyframe.
* Impact: if GOPs were saved from previous segment appends, the last GOP will be prepended to the segment.
* Need to account in time conversion? Yes. The segment is artificially extended, so while it shouldn't impact the stream time itself (since it will overlap with content already appended), it will impact the post transmux start of segment.
* GOPS to Align With
* Used when: switching renditions, or appending segments with overlapping GOPs (intersecting time ranges).
* Impact: GOPs in the segment will be dropped until there are no overlapping GOPs with previous segments.
* Need to account in time conversion? No. So long as we aren't switching renditions, and the content is sane enough to not contain overlapping GOPs, this should not have a meaningful impact.
Among the changes, with only GOP Fusion having an impact, the task is simplified. Instead of accounting for any changes to the video stream, only those from GOP Fusion should be accounted for. Since GOP fusion will potentially only prepend frames to the segment, we just need the number of seconds prepended to the segment when offsetting the time. As such, we can add the following properties to each segment:
```
segment: {
// calculated start of segment from either end of previous segment or end of last buffer
// (in stream time)
start,
...
videoTimingInfo: {
// number of seconds prepended by GOP fusion
transmuxerPrependedSeconds
// start of transmuxed segment (in player time)
transmuxedPresentationStart
}
}
```
## The Formula
With the properties listed above, calculating a *program time* from a *player time* is given as follows:
```
const playerTimeToProgramTime = (playerTime, segment) => {
if (!segment.dateTimeObject) {
// Can't convert without an "anchor point" for the program time (i.e., a time that can
// be used to map the start of a segment with a real world time).
return null;
}
const transmuxerPrependedSeconds = segment.videoTimingInfo.transmuxerPrependedSeconds;
const transmuxedStart = segment.videoTimingInfo.transmuxedPresentationStart;
// get the start of the content from before old content is prepended
const startOfSegment = transmuxedStart + transmuxerPrependedSeconds;
const offsetFromSegmentStart = playerTime - startOfSegment;
return new Date(segment.dateTimeObject.getTime() + offsetFromSegmentStart * 1000);
};
```
## Examples
```
// Program Times:
// segment1: 2018-11-10T00:00:30.1Z => 2018-11-10T00:00:32.1Z
// segment2: 2018-11-10T00:00:32.1Z => 2018-11-10T00:00:34.1Z
// segment3: 2018-11-10T00:00:34.1Z => 2018-11-10T00:00:36.1Z
//
// Player Times:
// segment1: 0 => 2
// segment2: 2 => 4
// segment3: 4 => 6
const segment1 = {
dateTimeObject: new Date('2018-11-10T00:00:30.1Z'),
videoTimingInfo: {
transmuxerPrependedSeconds: 0,
transmuxedPresentationStart: 0
}
};
playerTimeToProgramTime(0.1, segment1);
// startOfSegment = 0 + 0 = 0
// offsetFromSegmentStart = 0.1 - 0 = 0.1
// return 2018-11-10T00:00:30.1Z + 0.1 = 2018-11-10T00:00:30.2Z
const segment2 = {
dateTimeObject: new Date('2018-11-10T00:00:32.1Z'),
videoTimingInfo: {
transmuxerPrependedSeconds: 0.3,
transmuxedPresentationStart: 1.7
}
};
playerTimeToProgramTime(2.5, segment2);
// startOfSegment = 1.7 + 0.3 = 2
// offsetFromSegmentStart = 2.5 - 2 = 0.5
// return 2018-11-10T00:00:32.1Z + 0.5 = 2018-11-10T00:00:32.6Z
const segment3 = {
dateTimeObject: new Date('2018-11-10T00:00:34.1Z'),
videoTimingInfo: {
transmuxerPrependedSeconds: 0.2,
transmuxedPresentationStart: 3.8
}
};
playerTimeToProgramTime(4, segment3);
// startOfSegment = 3.8 + 0.2 = 4
// offsetFromSegmentStart = 4 - 4 = 0
// return 2018-11-10T00:00:34.1Z + 0 = 2018-11-10T00:00:34.1Z
```
## Transmux Before Append Changes
Even though segment timing values are retained for transmux before append, the formula does not need to change, as all that matters for calculation is the offset from the transmuxed segment start, which can then be applied to the stream time start of segment, or the program time start of segment.
## Getting the Right Segment
In order to make use of the above calculation, the right segment must be chosen for a given player time. This time may be retrieved by simply using the times of the segment after transmuxing (as the start/end pts/dts values then reflect the player time it should slot into in the source buffer). These are included in `videoTimingInfo` as `transmuxedPresentationStart` and `transmuxedPresentationEnd`.
Although there may be a small amount of overlap due to `transmuxerPrependedSeconds`, as long as the search is sequential from the beginning of the playlist to the end, the right segment will be found, as the prepended times will only come from content from prior segments.
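A sketch of that sequential search (helper name hypothetical; it relies only on the `videoTimingInfo` fields named above):
```js
// Sketch: find the segment whose post-transmux window contains playerTime.
const findSegmentForPlayerTime = (playerTime, playlist) => {
  for (let i = 0; i < playlist.segments.length; i++) {
    const segment = playlist.segments[i];

    // segments that haven't been transmuxed/appended have no timing info yet
    if (!segment.videoTimingInfo) {
      continue;
    }

    if (playerTime >= segment.videoTimingInfo.transmuxedPresentationStart &&
        playerTime <= segment.videoTimingInfo.transmuxedPresentationEnd) {
      // searching from the start of the playlist means any overlap from
      // prepended content resolves to the earlier segment
      return segment;
    }
  }

  return null;
};
```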

View File

@ -0,0 +1,43 @@
# Using the reloadSourceOnError Plugin
Call the plugin to activate it:
```js
player.reloadSourceOnError()
```
Now if the player encounters a fatal error during playback, it will automatically
attempt to reload the current source. If the error was caused by a transient
browser or networking problem, this can allow playback to continue with a minimum
of disruption to your viewers.
The plugin will only restart your player once in a 30 second time span so that your
player doesn't get into a reload loop if it encounters non-transient errors. You
can tweak the amount of time required between restarts by adjusting the
`errorInterval` option.
If your video URLs are time-sensitive, the original source could be invalid by the
time an error occurs. If that's the case, you can provide a `getSource` callback
to regenerate a valid source object. In your callback, the `this` keyword is a
reference to the player that errored. The first argument to `getSource` is a
function. Invoke that function and pass in your new source object when you're ready.
```js
player.reloadSourceOnError({
// getSource allows you to override the source object used when an error occurs
getSource: function(reload) {
console.log('Reloading because of an error');
// call reload() with a fresh source object
// you can do this step asynchronously if you want (but the error dialog will
// show up while you're waiting)
reload({
src: 'https://example.com/index.m3u8?token=abc123ef789',
type: 'application/x-mpegURL'
});
},
// errorInterval specifies the minimum amount of seconds that must pass before
// another reload will be attempted
errorInterval: 5
});
```

View File

@ -0,0 +1,288 @@
# Supported Features
## Browsers
Any browser that supports [MSE] (media source extensions). See
https://caniuse.com/#feat=mediasource
Note that browsers with native HLS support may play content with the native player, unless
the [overrideNative] option is used. Some notable browsers with native HLS players are:
* Safari (macOS and iOS)
* Chrome Android
* Firefox Android
However, due to the limited features offered by some of the native players, the only
browser on which VHS defaults to using the native player is Safari (macOS and iOS).
## Streaming Formats and Media Types
### Streaming Formats
VHS aims to be mostly streaming format agnostic. So long as the manifest can be parsed to
a common JSON representation, VHS should be able to play it. However, due to some large
differences between the major streaming formats (HLS and DASH), some format specific code
is included in VHS. If you have another format you would like supported, please reach out
to us (e.g., file an issue).
* [HLS] (HTTP Live Streaming)
* [MPEG-DASH] (Dynamic Adaptive Streaming over HTTP)
### Media Container Formats
* [TS] (MPEG Transport Stream)
* [MP4] (MPEG-4 Part 14: MP4, M4A, M4V, M4S, MPA), ISOBMFF
* [AAC] (Advanced Audio Coding)
### Codecs
If the content is packaged in an [MP4] container, then any codec supported by the browser
is supported. If the content is packaged in a [TS] container, then the codec must be
supported by [the transmuxer]. The following codecs are supported by the transmuxer:
* [AVC] (Advanced Video Coding, h.264)
* [AVC1] (Advanced Video Coding, h.264)
* [HE-AAC] (High Efficiency Advanced Audio Coding, mp4a.40.5)
* LC-AAC (Low Complexity Advanced Audio Coding, mp4a.40.2)
## General Notable Features
The following is a list of some, but not all, common streaming features supported by VHS.
It is meant to highlight some common use cases (and provide for easy searching), but is
not meant to serve as an exhaustive list.
* VOD (video on demand)
* LIVE
* Multiple audio tracks
* Timed [ID3] Metadata is automatically translated into HTML5 metadata text tracks
* Cross-domain credentials support with [CORS]
* Any browser supported resolution (e.g., 4k)
* Any browser supported framerate (e.g., 60fps)
* [DRM] via [videojs-contrib-eme]
* Audio only (non DASH)
* Video only (non DASH)
* In-manifest [WebVTT] subtitles are automatically translated into standard HTML5 subtitle
tracks
* [AES-128] segment encryption
* DASH In-manifest EventStream and Event tags are automatically translated into HTML5 metadata cues
* [Content Steering](content-steering.md) for both HLS and DASH.
## Notable Missing Features
Note that the following features have not yet been implemented or may work but are not
currently supported in browsers that do not rely on the native player. For browsers that
use the native player (e.g., Safari for HLS), please refer to their documentation.
### Container Formats
* [WebM]
* [WAV]
* [MP3]
* [OGG]
### Codecs
If the content is packaged within an [MP4] container and the browser supports the codec, it
will play. However, the following are some codecs that are not routinely tested, or are not
supported when packaged within [TS].
* [MP3]
* [Vorbis]
* [WAV]
* [FLAC]
* [Opus]
* [VP8]
* [VP9]
* [Dolby Vision] (DVHE)
* [Dolby Digital] Audio (AC-3)
* [Dolby Digital Plus] (E-AC-3)
### HLS Missing Features
Note: features for low latency HLS in the [2nd edition of HTTP Live Streaming] are on the
roadmap, but not currently available.
VHS strives to support all of the features in the HLS specification, however, some have
not yet been implemented. VHS currently supports everything in the
[HLS specification v7, revision 23], except the following:
* Use of [EXT-X-MAP] with [TS] segments
* [EXT-X-MAP] is currently supported for [MP4] segments, but not yet for TS
* I-Frame playlists via [EXT-X-I-FRAMES-ONLY] and [EXT-X-I-FRAME-STREAM-INF]
* [MP3] Audio
* [Dolby Digital] Audio (AC-3)
* [Dolby Digital Plus] Audio (E-AC-3)
* KEYFORMATVERSIONS of [EXT-X-KEY]
* [EXT-X-DATERANGE]
* [EXT-X-SESSION-DATA]
* [EXT-X-SESSION-KEY]
* Alternate video via [EXT-X-MEDIA] of type video
* ASSOC-LANGUAGE in [EXT-X-MEDIA]
* CHANNELS in [EXT-X-MEDIA]
* Use of AVERAGE-BANDWIDTH in [EXT-X-STREAM-INF] (value parsed but not used)
* Use of FRAME-RATE in [EXT-X-STREAM-INF] (value parsed but not used)
* Use of HDCP-LEVEL in [EXT-X-STREAM-INF]
* SAMPLE-AES segment encryption
In the event of encoding changes within a playlist (see
https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-6.3.3), the
behavior will depend on the browser.
### DASH Missing Features
DASH support is more recent than HLS support in VHS, however, VHS strives to achieve as
complete compatibility as possible with the DASH spec. The following are some notable
features in the DASH specification that are not yet implemented in VHS:
Note that many of the following are parsed by [mpd-parser] but are either not yet used, or
simply take on their default values (in the case where they have valid defaults).
* Audio and video only streams
* Audio rendition switching
* Each video rendition is paired with an audio rendition for the duration of playback.
* MPD
* @id
* @profiles
* @availabilityStartTime
* @availabilityEndTime
* @minBufferTime
* @maxSegmentDuration
* @maxSubsegmentDuration
* ProgramInformation
* Metrics
* Period
* @xlink:href
* @xlink:actuate
* @id
* @duration
* Normally used for determining the PeriodStart of the next period, VHS instead relies
on segment durations to determine timing of each segment and timeline
* @bitstreamSwitching
* Subset
* AdaptationSet
* @xlink:href
* @xlink:actuate
* @id
* @group
* @par (picture aspect ratio)
* @minBandwidth
* @maxBandwidth
* @minWidth
* @maxWidth
* @minHeight
* @maxHeight
* @minFrameRate
* @maxFrameRate
* @segmentAlignment
* @bitstreamSwitching
* @subsegmentAlignment
* @subsegmentStartsWithSAP
* Accessibility
* Rating
* Viewpoint
* ContentComponent
* Representation
* @id (used for SegmentTemplate but not exposed otherwise)
* @qualityRanking
* @dependencyId (dependent representation)
* @mediaStreamStructureId
* SubRepresentation
* CommonAttributesElements (for AdaptationSet, Representation and SubRepresentation elements)
* @profiles
* @sar
* @frameRate
* @audioSamplingRate
* @segmentProfiles
* @maximumSAPPeriod
* @startWithSAP
* @maxPlayoutRate
* @codingDependency
* @scanType
* FramePacking
* AudioChannelConfiguration
* SegmentBase
* @presentationTimeOffset
* @indexRangeExact
* RepresentationIndex
* MultipleSegmentBaseInformation elements
* SegmentList
* @xlink:href
* @xlink:actuate
* MultipleSegmentBaseInformation
* SegmentURL
* @index
* @indexRange
* SegmentTemplate
* MultipleSegmentBaseInformation
* @index
* @bitstreamSwitching
* BaseURL
* @serviceLocation
* Template-based Segment URL construction
* Live DASH assets that use $Time$ in a SegmentTemplate, and also have a SegmentTimeline
where only the first S has a t and the rest only have a d do not update on playlist
refreshes
See: https://github.com/videojs/http-streaming#dash-assets-with-time-interpolation-and-segmenttimelines-with-no-t
* ContentComponent elements
* Right now manifests are assumed to have a single content component, with the properties
described directly on the AdaptationSet element
* SubRepresentation elements
* Subset elements
* Early Available Periods (may work, but has not been tested)
* Access to subsegments via a subsegment index ('ssix')
* The @profiles attribute is ignored (best support for all profiles is attempted, without
consideration of the specific profile). For descriptions on profiles, see section 8 of
the DASH spec.
* Construction of byte range URLs via a BaseURL byteRange template (Annex E.2)
* Multiperiod content where the representation sets are not the same across periods
* In the event that an S element has a t attribute that is greater than what is expected,
it is not treated as a discontinuity, but instead retains its segment value, and may
result in a gap in the content
[MSE]: https://www.w3.org/TR/media-source/
[HLS]: https://en.wikipedia.org/wiki/HTTP_Live_Streaming
[MPEG-DASH]: https://en.wikipedia.org/wiki/Dynamic_Adaptive_Streaming_over_HTTP
[TS]: https://en.wikipedia.org/wiki/MPEG_transport_stream
[MP4]: https://en.wikipedia.org/wiki/MPEG-4_Part_14
[AAC]: https://en.wikipedia.org/wiki/Advanced_Audio_Coding
[AVC]: https://en.wikipedia.org/wiki/Advanced_Video_Coding
[AVC1]: https://en.wikipedia.org/wiki/Advanced_Video_Coding
[HE-AAC]: https://en.wikipedia.org/wiki/High-Efficiency_Advanced_Audio_Coding
[ID3]: https://en.wikipedia.org/wiki/ID3
[CORS]: https://en.wikipedia.org/wiki/Cross-origin_resource_sharing
[DRM]: https://en.wikipedia.org/wiki/Digital_rights_management
[WebVTT]: https://www.w3.org/TR/webvtt1/
[AES-128]: https://en.wikipedia.org/wiki/Advanced_Encryption_Standard
[WebM]: https://en.wikipedia.org/wiki/WebM
[WAV]: https://en.wikipedia.org/wiki/WAV
[MP3]: https://en.wikipedia.org/wiki/MP3
[OGG]: https://en.wikipedia.org/wiki/Ogg
[Vorbis]: https://en.wikipedia.org/wiki/Vorbis
[FLAC]: https://en.wikipedia.org/wiki/FLAC
[Opus]: https://en.wikipedia.org/wiki/Opus_(audio_format)
[VP8]: https://en.wikipedia.org/wiki/VP8
[VP9]: https://en.wikipedia.org/wiki/VP9
[overrideNative]: https://github.com/videojs/http-streaming#overridenative
[the transmuxer]: https://github.com/videojs/mux.js
[videojs-contrib-eme]: https://github.com/videojs/videojs-contrib-eme
[2nd edition of HTTP Live Streaming]: https://tools.ietf.org/html/draft-pantos-hls-rfc8216bis-07.html
[HLS specification v7, revision 23]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23
[EXT-X-MAP]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.2.5
[EXT-X-STREAM-INF]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.4.2
[EXT-X-SESSION-DATA]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.4.4
[EXT-X-DATERANGE]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.2.7
[EXT-X-KEY]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.2.4
[EXT-X-I-FRAMES-ONLY]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.3.6
[EXT-X-I-FRAME-STREAM-INF]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.4.3
[EXT-X-SESSION-KEY]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.4.5
[EXT-X-START]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.5.2
[EXT-X-MEDIA]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.4.1
[Dolby Vision]: https://en.wikipedia.org/wiki/High-dynamic-range_video#Dolby_Vision
[Dolby Digital]: https://en.wikipedia.org/wiki/Dolby_Digital
[Dolby Digital Plus]: https://en.wikipedia.org/wiki/Dolby_Digital_Plus
[mpd-parser]: https://github.com/videojs/mpd-parser

View File

@ -0,0 +1,67 @@
# Troubleshooting Guide
## Other troubleshooting guides
For issues around data embedded into media segments (e.g., 608 captions), see the [mux.js troubleshooting guide](https://github.com/videojs/mux.js/blob/main/docs/troubleshooting.md).
## Tools
### Thumbcoil
Thumbcoil is a video inspector tool that can unpackage various media containers and inspect the bitstreams therein. Thumbcoil runs entirely within your browser so that none of your video data is ever transmitted to a server.
http://thumb.co.il<br/>
http://beta.thumb.co.il<br/>
https://github.com/videojs/thumbcoil<br/>
## Table of Contents
- [Content plays on Mac but not Windows](#content-plays-on-mac-but-not-windows)
- ["No compatible source was found" on IE11 Win 7](#no-compatible-source-was-found-on-ie11-win-7)
- [CORS: No Access-Control-Allow-Origin header](#cors-no-access-control-allow-origin-header)
- [Desktop Safari/iOS Safari/Android Chrome/Edge exhibit different behavior from other browsers](#desktop-safariios-safariandroid-chromeedge-exhibit-different-behavior-from-other-browsers)
- [MEDIA_ERR_DECODE error on Desktop Safari](#media_err_decode-error-on-desktop-safari)
- [Network requests are still being made while paused](#network-requests-are-still-being-made-while-paused)
## Content plays on Mac but not Windows
Some browsers may not be able to play audio sample rates higher than 48 kHz. See https://docs.microsoft.com/en-gb/windows/desktop/medfound/aac-decoder#format-constraints
Potential solution: re-encode with a Windows-supported audio sample rate (48 kHz or lower).
## "No compatible source was found" on IE11 Win 7
videojs-http-streaming does not support Flash-based HLS playback (as the videojs-contrib-hls plugin did).
Solution: include the videojs-flashls-source-handler plugin: https://github.com/brightcove/videojs-flashls-source-handler#usage
## CORS: No Access-Control-Allow-Origin header
If you see an error along the lines of
```
XMLHttpRequest cannot load ... No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin ... is therefore not allowed access.
```
you need to properly configure CORS on your server: https://github.com/videojs/http-streaming#hosting-considerations
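If you control the media server, ensure every manifest and segment response carries the appropriate CORS headers. A minimal sketch using Express (the framework choice and the `media` directory are assumptions for illustration, not part of this project):
```js
const express = require('express');
const app = express();

app.use(function(req, res, next) {
  // allow cross-origin reads of manifests and segments, including Range requests
  res.setHeader('Access-Control-Allow-Origin', '*');
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Range');
  res.setHeader('Access-Control-Expose-Headers', 'Content-Length, Content-Range');
  next();
});

app.use(express.static('media'));
app.listen(10000);
```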
## Desktop Safari/iOS Safari/Android Chrome/Edge exhibit different behavior from other browsers
Some browsers support native playback of certain streaming formats. By default, we defer to the native players. However, this means that features specific to videojs-http-streaming will not be available.
On Edge and mobile Chrome, 608 captions, ID3 tags, or live streaming may not work as expected with native playback; it is recommended that `overrideNative` be used on those platforms if those features are needed.
Solution: use videojs-http-streaming based playback on those devices: https://github.com/videojs/http-streaming#overridenative
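For example (a minimal sketch; the player element id is an assumption):
```js
var player = videojs('my-player', {
  html5: {
    vhs: {
      // keep native playback on Safari, override it elsewhere
      overrideNative: !videojs.browser.IS_SAFARI
    },
    nativeAudioTracks: false,
    nativeVideoTracks: false
  }
});
```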
## MEDIA_ERR_DECODE error on Desktop Safari
This error may occur for a number of reasons; it is particularly common with misconfigured content. One instance of misconfiguration is when the source manifest has `CLOSED-CAPTIONS=NONE` and an external text track is loaded into the player. Safari does not allow the inclusion of any captions if the manifest indicates that captions will not be provided.
Solution: remove `CLOSED-CAPTIONS=NONE` from the manifest
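For example, a variant declared like this (attribute values are illustrative) would need `CLOSED-CAPTIONS=NONE` removed before external text tracks are added:
```
#EXT-X-STREAM-INF:BANDWIDTH=1280000,CODECS="avc1.4d401f,mp4a.40.2",CLOSED-CAPTIONS=NONE
video/playlist.m3u8
```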
## Network requests are still being made while paused
There are a couple of cases where network requests will still be made by VHS when the video is paused.
1) If the forward buffer (buffered content ahead of the playhead) has not reached the GOAL\_BUFFER\_LENGTH. For instance, if the playhead is at time 10 seconds, the buffered range goes from 5 seconds to 20 seconds, and the GOAL\_BUFFER\_LENGTH is set to 30 seconds, then segments will continue to be requested, even while paused, until the buffer ends at a time greater than or equal to 10 seconds (current time) + 30 seconds (GOAL\_BUFFER\_LENGTH) = 40 seconds. This is expected behavior in order to provide a better playback experience; see the sketch after this list for tuning the goal.
2) If the stream is LIVE, the manifest will continue to be refreshed even while paused. This is because it is easier to keep playback in sync if we receive manifest updates consistently.
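To reduce how much is fetched while paused, the forward-buffer goal can be lowered. A sketch, assuming VHS is registered on the global `videojs` (set this before playback begins; 30 seconds is the default):
```js
// request fewer segments ahead of the playhead
videojs.Vhs.GOAL_BUFFER_LENGTH = 15;
```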

313
VApp/node_modules/@videojs/http-streaming/index.html generated vendored Normal file
View File

@ -0,0 +1,313 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>videojs-http-streaming Demo</title>
<link rel="icon" href="logo.svg">
<link href="node_modules/bootstrap/dist/css/bootstrap.css" rel="stylesheet">
<link href="node_modules/video.js/dist/video-js.css" rel="stylesheet">
<link rel="stylesheet" href="node_modules/videojs-contrib-quality-menu/dist/videojs-contrib-quality-menu.css">
<style>
.form-check {
background-color: hsl(0, 0%, 90%);
margin-block: 0.5rem;
padding: 0.25em 0.25em 0.25em 1.75em;
width: 700px;
width: fit-content;
}
#player-fixture {
min-height: 250px;
}
#segment-metadata {
list-style: none;
}
#segment-metadata pre {
overflow: scroll;
}
button.btn-outline-secondary:hover svg,
button.btn-success svg,
button.btn-danger svg {
fill: white;
}
</style>
</head>
<body class="m-4">
<header class="container-fluid">
<a href="https://github.com/videojs/http-streaming" class="d-flex align-items-center pb-3 mb-5 border-bottom" style="height: 4em">
<img src="./logo.svg" alt="VHS logo showcasing a VHS tape with the Video.js logo on the label" class="rounded mh-100">
<span class="fs-1 ps-2">VHS: videojs-http-streaming</span>
</a>
</header>
<div id="player-fixture" class="container-fluid pb-3 mb-3"></div>
<ul class="nav nav-tabs container-fluid mb-3" id="myTab" role="tablist">
<li class="nav-item" role="presentation">
<button class="nav-link active" id="home-tab" data-bs-toggle="tab" data-bs-target="#sources" type="button" role="tab" aria-selected="true">Sources</button>
</li>
<li class="nav-item" role="presentation">
<button class="nav-link" id="contact-tab" data-bs-toggle="tab" data-bs-target="#options" type="button" role="tab" aria-selected="false">Options</button>
</li>
<li class="nav-item" role="presentation">
<button class="nav-link" id="profile-tab" data-bs-toggle="tab" data-bs-target="#levels" type="button" role="tab" aria-selected="false">Representations</button>
</li>
<li class="nav-item" role="presentation">
<button class="nav-link" id="profile-tab" data-bs-toggle="tab" data-bs-target="#player-stats" type="button" role="tab" aria-selected="false">Player Stats</button>
</li>
<li class="nav-item" role="presentation">
<button class="nav-link" id="profile-tab" data-bs-toggle="tab" data-bs-target="#content-steering" type="button" role="tab" aria-selected="false">Content Steering</button>
</li>
<li class="nav-item" role="presentation">
<button class="nav-link" id="profile-tab" data-bs-toggle="tab" data-bs-target="#export-logs" type="button" role="tab" aria-selected="false">Logs</button>
</li>
</ul>
<div class="tab-content container-fluid">
<div class="tab-pane active" id="sources" role="tabpanel">
<div class="input-group mb-2">
<span class="input-group-text"><label for=load-source>Preloaded Sources</label></span>
<select id=load-source class="form-select">
<optgroup label="hls">
</optgroup>
<optgroup label="dash">
</optgroup>
<optgroup label="drm">
</optgroup>
<optgroup label="live">
</optgroup>
<optgroup label="low latency live">
</optgroup>
<optgroup label="json manifest object">
</optgroup>
</select>
</div>
<label for=url class="form-label">Source URL</label>
<div class="input-group">
<span class="input-group-text"><label for=url>Url</label></span>
<input id=url type=url class="form-control">
</div>
<label for=type class="form-label">Source Type (uses url extension if blank, usually application/x-mpegURL or application/dash+xml)</label>
<div class="input-group">
<span class="input-group-text"><label for=type>Type</label></span>
<input id=type type=text class="form-control">
</div>
<label for="keysystems" class="form-label">Optional Keystems JSON:</label>
<div class="input-group">
<span class="input-group-text"><label for=keysystems>keySystems JSON</label></span>
<textarea id=keysystems cols=100 rows=5 class="form-control"></textarea>
</div>
<button id=load-url type=button class="btn btn-primary my-2">Load</button>
</div>
<div class="tab-pane" id="options" role="tabpanel">
<div class="options">
<div class="form-check">
<input id=minified type="checkbox" class="form-check-input">
<label class="form-check-label" for="minified">Minified VHS (reloads player)</label>
</div>
<div class="form-check">
<input id=sync-workers type="checkbox" class="form-check-input">
<label class="form-check-label" for="sync-workers">Synchronous Web Workers (reloads player)</label>
</div>
<div class="form-check">
<input id=liveui type="checkbox" class="form-check-input" checked>
<label class="form-check-label" for="liveui">Enable the live UI (reloads player)</label>
</div>
<div class="form-check">
<input id=fluid type="checkbox" class="form-check-input">
<label class="form-check-label" for="fluid">Fluid mode</label>
</div>
<div class="form-check">
<input id=debug type="checkbox" class="form-check-input">
<label class="form-check-label" for="debug">Debug Logging</label>
</div>
<div class="form-check">
<input id=muted type="checkbox" class="form-check-input">
<label class="form-check-label" for="muted">Muted</label>
</div>
<div class="form-check">
<input id=autoplay type="checkbox" class="form-check-input">
<label class="form-check-label" for="autoplay">Autoplay</label>
</div>
<div class="form-check">
<input id=network-info type="checkbox" class="form-check-input" checked>
<label class="form-check-label" for="network-info">Use networkInfo API for bandwidth estimations (reloads player)</label>
</div>
<div class="form-check">
<input id=dts-offset type="checkbox" class="form-check-input">
<label class="form-check-label" for="dts-offset">Use DTS instead of PTS for Timestamp Offset calculation (reloads player)</label>
</div>
<div class="form-check">
<input id=llhls type="checkbox" class="form-check-input">
<label class="form-check-label" for="llhls">[EXPERIMENTAL] Enables support for ll-hls (reloads player)</label>
</div>
<div class="form-check">
<input id=buffer-water type="checkbox" class="form-check-input">
<label class="form-check-label" for="buffer-water">[EXPERIMENTAL] Use Buffer Level for ABR (reloads player)</label>
</div>
<div class="form-check">
<input id=exact-manifest-timings type="checkbox" class="form-check-input">
<label class="form-check-label" for="exact-manifest-timings">[EXPERIMENTAL] Use exact manifest timings for segment choices (reloads player)</label>
</div>
<div class="form-check">
<input id=pixel-diff-selector type="checkbox" class="form-check-input">
<label class="form-check-label" for="pixel-diff-selector">[EXPERIMENTAL] Use the Pixel difference resolution selector (reloads player)</label>
</div>
<div class="form-check">
<input id=object-fit type="checkbox" class="form-check-input">
<label class="form-check-label" for="object-fit">Account Object-fit for resolution selection (reloads player)</label>
</div>
<div class="form-check">
<input id=override-native type="checkbox" class="form-check-input" checked>
<label class="form-check-label" for="override-native">Override Native (reloads player)</label>
</div>
<div class="form-check">
<input id=use-mms type="checkbox" class="form-check-input" checked>
<label class="form-check-label" for="use-mms">[EXPERIMENTAL] Use ManagedMediaSource if available. Use in combination with override native (reloads player)</label>
</div>
<div class="form-check">
<input id=mirror-source type="checkbox" class="form-check-input" checked>
<label class="form-check-label" for="mirror-source">Mirror sources from player.src (reloads player, uses EXPERIMENTAL sourceset option)</label>
</div>
<div class="form-check">
<input id="forced-subtitles" type="checkbox" class="form-check-input">
<label class="form-check-label" for="forced-subtitles">Use Forced Subtitles (reloads player)</label>
</div>
<div class="form-check">
<input id="native-text-tracks" type="checkbox" class="form-check-input">
<label class="form-check-label" for="native-text-tracks">Use native text tracks (reloads player)</label>
</div>
<div class="input-group">
<span class="input-group-text"><label for=preload>Preload (reloads player)</label></span>
<select id=preload class="form-select">
<option selected>auto</option>
<option>none</option>
<option>metadata</option>
</select>
</div>
</div>
</div>
<div class="tab-pane" id="levels" role="tabpanel">
<div class="input-group">
<span class="input-group-text"><label for=representations>Representations</label></span>
<select id='representations' class="form-select"></select>
</div>
</div>
<div class="tab-pane" id="player-stats" role="tabpanel">
<div class="row">
<div class="player-stats col-4">
<dl>
<dt>Current Time:</dt>
<dd class="current-time-stat">0</dd>
<dt>Buffered:</dt>
<dd class="buffered-stat">-</dd>
<dt>Video Buffered:</dt>
<dd class="video-buffered-stat">-</dd>
<dt>Audio Buffered:</dt>
<dd class="audio-buffered-stat">-</dd>
<dt>Seekable:</dt>
<dd><span class="seekable-start-stat">-</span> - <span class="seekable-end-stat">-</span></dd>
<dt>Video Bitrate:</dt>
<dd class="video-bitrate-stat">0 kbps</dd>
<dt>Measured Bitrate:</dt>
<dd class="measured-bitrate-stat">0 kbps</dd>
<dt>Video Timestamp Offset</dt>
<dd class="video-timestampoffset">0</dd>
<dt>Audio Timestamp Offset</dt>
<dd class="audio-timestampoffset">0</dd>
</dl>
</div>
<ul id="segment-metadata" class="col-8"></ul>
</div>
</div>
<div class="tab-pane" id="content-steering" role="tabpanel">
<div class="row">
<div class="content-steering col-8">
<dl>
<dt>Current Pathway:</dt>
<dd class="current-pathway"></dd>
<dt>Available Pathways:</dt>
<dd class="available-pathways"></dd>
<dt>Steering Manifest:</dt>
<dd class="steering-manifest"></dd>
</dl>
</div>
</div>
</div>
<div class="tab-pane" id="export-logs" role="historypanel">
<div class="row">
<div class="export-logs col-8">
<p>Download or copy the player logs, which should be included when submitting a playback issue.</p>
<p>To insert a comment into the player log, use <code>player.log()</code> in the console, e.g. <code>player.log('Seeking to 500');player.currentTime(500);</code></p>
<button id="download-logs" class="btn btn-outline-secondary">
<span class="icon">
<svg xmlns="http://www.w3.org/2000/svg" height="24px" viewBox="0 -960 960 960" width="24px" fill="#5f6368">
<path d="M480-320 280-520l56-58 104 104v-326h80v326l104-104 56 58-200 200ZM240-160q-33 0-56.5-23.5T160-240v-120h80v120h480v-120h80v120q0 33-23.5 56.5T720-160H240Z"/>
</svg>
</span>
Download player logs
</button>
<button id="copy-logs" class="btn btn-outline-secondary">
<span class="icon">
<svg xmlns="http://www.w3.org/2000/svg" height="24px" viewBox="0 -960 960 960" width="24px" fill="#5f6368">
<path d="M360-240q-33 0-56.5-23.5T280-320v-480q0-33 23.5-56.5T360-880h360q33 0 56.5 23.5T800-800v480q0 33-23.5 56.5T720-240H360Zm0-80h360v-480H360v480ZM200-80q-33 0-56.5-23.5T120-160v-560h80v560h440v80H200Zm160-240v-480 480Z"/>
</svg>
</span>
Copy player logs to clipboard
</button>
</div>
</div>
</div>
</div>
<footer class="text-center p-3" id=unit-test-link>
<a href="test/debug.html">Run unit tests</a>
</footer>
<script>
var unitTestLink = document.getElementById('unit-test-link');
// remove the test run link on netlify, as we cannot run tests there.
if ((/netlify.app/).test(window.location.host)) {
unitTestLink.remove();
}
</script>
<script src="node_modules/bootstrap/dist/js/bootstrap.js"></script>
<script src="scripts/index.js"></script>
<script>
window.startDemo(function(player) {
// do something with setup player
});
</script>
</body>
</html>

115
VApp/node_modules/@videojs/http-streaming/package.json generated vendored Normal file
View File

@ -0,0 +1,115 @@
{
"name": "@videojs/http-streaming",
"version": "3.17.0",
"description": "Play back HLS and DASH with Video.js, even where it's not natively supported",
"main": "dist/videojs-http-streaming.cjs.js",
"module": "dist/videojs-http-streaming.es.js",
"repository": {
"type": "git",
"url": "git@github.com:videojs/http-streaming.git"
},
"scripts": {
"prenetlify": "npm run build",
"netlify": "node scripts/netlify.js",
"build-test": "cross-env-shell TEST_BUNDLE_ONLY=1 'npm run build'",
"build-prod": "cross-env-shell NO_TEST_BUNDLE=1 'npm run build'",
"build": "npm-run-all -s clean -p build:*",
"build:js": "rollup -c scripts/rollup.config.js",
"docs": "npm-run-all docs:*",
"docs:api": "jsdoc src -g plugins/markdown -r -d docs/api",
"docs:toc": "doctoc --notitle README.md",
"docs:images:puml": "for i in docs/images/sources/*.puml; do npx water-uml export $i -f png -o \"docs/images/$(echo $i | cut -d '/' -f 4)\"; done",
"docs:images:nomnoml": "node ./scripts/create-docs-images.js",
"docs:images": "npm-run-all -p docs:images:puml docs:images:nomnoml",
"clean": "shx rm -rf ./dist ./test/dist && shx mkdir -p ./dist ./test/dist",
"lint": "vjsstandard",
"prepublishOnly": "npm-run-all build-prod && vjsverify --verbose --skip-es-check",
"start": "npm-run-all -p server watch",
"server": "karma start scripts/karma.conf.js --singleRun=false --auto-watch",
"test": "npm-run-all lint build-test && karma start scripts/karma.conf.js",
"posttest": "[ \"$CI_TEST_TYPE\" != 'coverage' ] || shx cat test/dist/coverage/text.txt",
"version": "vjs-update-changelog --add --run-on-prerelease",
"watch": "npm-run-all -p watch:*",
"watch:js": "npm run build:js -- -w"
},
"keywords": [
"videojs",
"videojs-plugin"
],
"author": "Brightcove, Inc",
"license": "Apache-2.0",
"vjsstandard": {
"ignore": [
"dist",
"docs",
"deploy",
"test/dist",
"utils",
"src/*.worker.js"
]
},
"files": [
"CONTRIBUTING.md",
"dist/",
"docs/",
"index.html",
"scripts/",
"src/"
],
"dependencies": {
"@babel/runtime": "^7.12.5",
"@videojs/vhs-utils": "^4.1.1",
"aes-decrypter": "^4.0.2",
"global": "^4.4.0",
"m3u8-parser": "^7.2.0",
"mpd-parser": "^1.3.1",
"mux.js": "7.1.0",
"video.js": "^7 || ^8"
},
"peerDependencies": {
"video.js": "^8.19.0"
},
"devDependencies": {
"@babel/cli": "^7.21.0",
"@popperjs/core": "^2.11.7",
"@rollup/plugin-replace": "^2.3.4",
"@rollup/plugin-strip": "^2.0.1",
"@videojs/generator-helpers": "~3.1.0",
"bootstrap": "^5.1.0",
"d3": "^3.4.8",
"jsdoc": "^3.6.11",
"karma": "^6.4.0",
"lodash": "^4.17.4",
"lodash-compat": "^3.10.0",
"nomnoml": "^1.5.2",
"rollup": "^2.36.1",
"rollup-plugin-worker-factory": "0.5.7",
"shelljs": "^0.8.5",
"sinon": "^8.1.1",
"url-toolkit": "^2.2.1",
"videojs-contrib-eme": "^5.3.1",
"videojs-contrib-quality-levels": "^4.0.0",
"videojs-contrib-quality-menu": "^1.0.3",
"videojs-generate-karma-config": "^8.0.1",
"videojs-generate-rollup-config": "^7.0.0",
"videojs-generator-verify": "~3.0.1",
"videojs-standard": "^9.1.0",
"water-plant-uml": "^2.0.2"
},
"generator-videojs-plugin": {
"version": "7.6.3"
},
"engines": {
"node": ">=8",
"npm": ">=5"
},
"husky": {
"hooks": {
"pre-commit": "lint-staged"
}
},
"lint-staged": {
"*.js": "vjsstandard --fix",
"README.md": "doctoc --notitle"
}
}
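For reference, a minimal consumer of the `main`/`module` entry points above might look like this (the element id and URL are illustrative; stock video.js builds already include VHS, so the explicit import matters mainly with the alternate `video.core` build):
```js
import videojs from 'video.js';
// registers the VHS source handler with video.js
import '@videojs/http-streaming';

const player = videojs('my-player');
player.src({
  src: 'https://example.com/index.m3u8',
  type: 'application/x-mpegURL'
});
```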

View File

@ -0,0 +1,31 @@
/* eslint-disable no-console */
const nomnoml = require('nomnoml');
const fs = require('fs');
const path = require('path');
const basePath = path.resolve(__dirname, '..');
const docImageDir = path.join(basePath, 'docs/images');
const nomnomlSourceDir = path.join(basePath, 'docs/images/sources');
const buildImages = {
build() {
const files = fs.readdirSync(nomnomlSourceDir);
while (files.length > 0) {
const file = path.resolve(nomnomlSourceDir, files.shift());
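// note: path.basename strips a literal trailing 'txt' (not '.txt'), so 'diagram.nomnoml.txt' becomes 'diagram.nomnoml.'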
const basename = path.basename(file, 'txt');
if (/.nomnoml/.test(basename)) {
const fileContents = fs.readFileSync(file, 'utf-8');
const generated = nomnoml.renderSvg(fileContents);
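// basename still ends with '.', so appending 'svg' yields 'diagram.nomnoml.svg'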
const newFilePath = path.join(docImageDir, basename) + 'svg';
const outFile = fs.createWriteStream(newFilePath);
console.log(`wrote file ${newFilePath}`);
outFile.write(generated);
}
}
}
};
buildImages.build();

View File

@ -0,0 +1,115 @@
/* global window */
const fs = require('fs');
const path = require('path');
const baseDir = path.join(__dirname, '..');
const manifestsDir = path.join(baseDir, 'test', 'manifests');
const segmentsDir = path.join(baseDir, 'test', 'segments');
const base64ToUint8Array = function(base64) {
const decoded = window.atob(base64);
const uint8Array = new Uint8Array(new ArrayBuffer(decoded.length));
for (let i = 0; i < decoded.length; i++) {
uint8Array[i] = decoded.charCodeAt(i);
}
return uint8Array;
};
const getManifests = () => (fs.readdirSync(manifestsDir) || [])
.filter((f) => ((/\.(m3u8|mpd)/).test(path.extname(f))))
.map((f) => path.resolve(manifestsDir, f));
const getSegments = () => (fs.readdirSync(segmentsDir) || [])
.filter((f) => ((/\.(ts|mp4|key|webm|aac|ac3|vtt)/).test(path.extname(f))))
.map((f) => path.resolve(segmentsDir, f));
const buildManifestString = function() {
let manifests = 'export default {\n';
getManifests().forEach((file) => {
// translate this manifest
manifests += ' \'' + path.basename(file, path.extname(file)) + '\': ';
manifests += fs.readFileSync(file, 'utf8')
.split(/\r\n|\n/)
// quote and concatenate
.map((line) => ' \'' + line + '\\n\' +\n')
.join('')
// strip leading spaces and the trailing '+'
.slice(4, -3);
manifests += ',\n';
});
// clean up and close the objects
manifests = manifests.slice(0, -2);
manifests += '\n};\n';
return manifests;
};
const buildSegmentString = function() {
const segmentData = {};
getSegments().forEach((file) => {
// read the file directly as a buffer before converting to base64
const base64Segment = fs.readFileSync(file).toString('base64');
segmentData[path.basename(file, path.extname(file))] = base64Segment;
});
const segmentDataExportStrings = Object.keys(segmentData).reduce((acc, key) => {
// use a function since the segment may be cleared out on usage
acc.push(`export const ${key} = () => {
cache.${key} = cache.${key} || base64ToUint8Array('${segmentData[key]}');
const dest = new Uint8Array(cache.${key}.byteLength);
dest.set(cache.${key});
return dest;
};`);
return acc;
}, []);
const segmentsFile =
'const cache = {};\n' +
`const base64ToUint8Array = ${base64ToUint8Array.toString()};\n` +
segmentDataExportStrings.join('\n');
return segmentsFile;
};
/* we refer to them as .js, so that babel and other plugins can work on them */
const segmentsKey = 'create-test-data!segments.js';
const manifestsKey = 'create-test-data!manifests.js';
module.exports = function() {
return {
name: 'createTestData',
buildStart() {
this.addWatchFile(segmentsDir);
this.addWatchFile(manifestsDir);
[].concat(getSegments())
.concat(getManifests())
.forEach((file) => this.addWatchFile(file));
},
resolveId(importee, importer) {
// if this is not an id we can resolve, return undefined
if (importee.indexOf('create-test-data!') !== 0) {
return;
}
const name = importee.split('!')[1];
return (name.indexOf('segments') === 0) ? segmentsKey : manifestsKey;
},
load(id) {
if (id === segmentsKey) {
return buildSegmentString.call(this);
}
if (id === manifestsKey) {
return buildManifestString.call(this);
}
}
};
};
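// For reference, tests consume the virtual modules above roughly like this
// (the module suffix and export names are illustrative assumptions):
//
//   import manifests from 'create-test-data!manifests.js';
//   import {muxed as muxedSegment} from 'create-test-data!segments.js';
//
// each segment export is a function that returns a fresh Uint8Array copy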

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,380 @@
{
"mediaGroups": {
"AUDIO": {
"audio": {
"default": {
"autoselect": true,
"default": true,
"language": "",
"uri": "combined-audio-playlists",
"playlists": [
{
"attributes": {},
"segments": [
{
"duration": 4.011,
"uri": "a-eng-0128k-aac-2c-s1.mp4",
"timeline": 0,
"map": {
"uri": "a-eng-0128k-aac-2c-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-s1.mp4",
"number": 0
},
{
"duration": 3.989,
"uri": "a-eng-0128k-aac-2c-s2.mp4",
"timeline": 0,
"map": {
"uri": "a-eng-0128k-aac-2c-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-s2.mp4",
"number": 1
},
{
"duration": 4.011,
"uri": "a-eng-0128k-aac-2c-s3.mp4",
"timeline": 0,
"map": {
"uri": "a-eng-0128k-aac-2c-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-s3.mp4",
"number": 2
},
{
"duration": 3.989,
"uri": "a-eng-0128k-aac-2c-s4.mp4",
"timeline": 0,
"map": {
"uri": "a-eng-0128k-aac-2c-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-s4.mp4",
"number": 3
},
{
"duration": 4.011,
"uri": "a-eng-0128k-aac-2c-s5.mp4",
"timeline": 0,
"map": {
"uri": "a-eng-0128k-aac-2c-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-s5.mp4",
"number": 4
},
{
"duration": 3.989,
"uri": "a-eng-0128k-aac-2c-s6.mp4",
"timeline": 0,
"map": {
"uri": "a-eng-0128k-aac-2c-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-s6.mp4",
"number": 5
},
{
"duration": 4.011,
"uri": "a-eng-0128k-aac-2c-s7.mp4",
"timeline": 0,
"map": {
"uri": "a-eng-0128k-aac-2c-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-s7.mp4",
"number": 6
},
{
"duration": 3.989,
"uri": "a-eng-0128k-aac-2c-s8.mp4",
"timeline": 0,
"map": {
"uri": "a-eng-0128k-aac-2c-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-s8.mp4",
"number": 7
},
{
"duration": 4.011,
"uri": "a-eng-0128k-aac-2c-s9.mp4",
"timeline": 0,
"map": {
"uri": "a-eng-0128k-aac-2c-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-s9.mp4",
"number": 8
},
{
"duration": 3.989,
"uri": "a-eng-0128k-aac-2c-s10.mp4",
"timeline": 0,
"map": {
"uri": "a-eng-0128k-aac-2c-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-s10.mp4",
"number": 9
},
{
"duration": 4.011,
"uri": "a-eng-0128k-aac-2c-s11.mp4",
"timeline": 0,
"map": {
"uri": "a-eng-0128k-aac-2c-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-s11.mp4",
"number": 10
},
{
"duration": 3.989,
"uri": "a-eng-0128k-aac-2c-s12.mp4",
"timeline": 0,
"map": {
"uri": "a-eng-0128k-aac-2c-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-s12.mp4",
"number": 11
},
{
"duration": 4.011,
"uri": "a-eng-0128k-aac-2c-s13.mp4",
"timeline": 0,
"map": {
"uri": "a-eng-0128k-aac-2c-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-s13.mp4",
"number": 12
},
{
"duration": 3.989,
"uri": "a-eng-0128k-aac-2c-s14.mp4",
"timeline": 0,
"map": {
"uri": "a-eng-0128k-aac-2c-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-s14.mp4",
"number": 13
},
{
"duration": 4,
"uri": "a-eng-0128k-aac-2c-s15.mp4",
"timeline": 0,
"map": {
"uri": "a-eng-0128k-aac-2c-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/a-eng-0128k-aac-2c-s15.mp4",
"number": 14
}
],
"uri": "combined-playlist-audio",
"resolvedUri": "combined-playlist-audio",
"playlistType": "VOD",
"endList": true,
"mediaSequence": 0,
"discontinuitySequence": 0,
"discontinuityStarts": []
}
]
}
}
},
"VIDEO": {},
"CLOSED-CAPTIONS": {},
"SUBTITLES": {}
},
"uri": "http://localhost:10000/",
"playlists": [
{
"attributes": {
"BANDWIDTH": 8062646,
"CODECS": "avc1.4d401f,mp4a.40.2",
"AUDIO": "audio"
},
"segments": [
{
"duration": 4,
"uri": "v-0576p-1400k-libx264-s1.mp4",
"timeline": 0,
"map": {
"uri": "v-0576p-1400k-libx264-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-s1.mp4",
"number": 0
},
{
"duration": 4,
"uri": "v-0576p-1400k-libx264-s2.mp4",
"timeline": 0,
"map": {
"uri": "v-0576p-1400k-libx264-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-s2.mp4",
"number": 1
},
{
"duration": 4,
"uri": "v-0576p-1400k-libx264-s3.mp4",
"timeline": 0,
"map": {
"uri": "v-0576p-1400k-libx264-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-s3.mp4",
"number": 2
},
{
"duration": 4,
"uri": "v-0576p-1400k-libx264-s4.mp4",
"timeline": 0,
"map": {
"uri": "v-0576p-1400k-libx264-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-s4.mp4",
"number": 3
},
{
"duration": 4,
"uri": "v-0576p-1400k-libx264-s5.mp4",
"timeline": 0,
"map": {
"uri": "v-0576p-1400k-libx264-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-s5.mp4",
"number": 4
},
{
"duration": 4,
"uri": "v-0576p-1400k-libx264-s6.mp4",
"timeline": 0,
"map": {
"uri": "v-0576p-1400k-libx264-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-s6.mp4",
"number": 5
},
{
"duration": 4,
"uri": "v-0576p-1400k-libx264-s7.mp4",
"timeline": 0,
"map": {
"uri": "v-0576p-1400k-libx264-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-s7.mp4",
"number": 6
},
{
"duration": 4,
"uri": "v-0576p-1400k-libx264-s8.mp4",
"timeline": 0,
"map": {
"uri": "v-0576p-1400k-libx264-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-s8.mp4",
"number": 7
},
{
"duration": 4,
"uri": "v-0576p-1400k-libx264-s9.mp4",
"timeline": 0,
"map": {
"uri": "v-0576p-1400k-libx264-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-s9.mp4",
"number": 8
},
{
"duration": 4,
"uri": "v-0576p-1400k-libx264-s10.mp4",
"timeline": 0,
"map": {
"uri": "v-0576p-1400k-libx264-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-s10.mp4",
"number": 9
},
{
"duration": 4,
"uri": "v-0576p-1400k-libx264-s11.mp4",
"timeline": 0,
"map": {
"uri": "v-0576p-1400k-libx264-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-s11.mp4",
"number": 10
},
{
"duration": 4,
"uri": "v-0576p-1400k-libx264-s12.mp4",
"timeline": 0,
"map": {
"uri": "v-0576p-1400k-libx264-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-s12.mp4",
"number": 11
},
{
"duration": 4,
"uri": "v-0576p-1400k-libx264-s13.mp4",
"timeline": 0,
"map": {
"uri": "v-0576p-1400k-libx264-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-s13.mp4",
"number": 12
},
{
"duration": 4,
"uri": "v-0576p-1400k-libx264-s14.mp4",
"timeline": 0,
"map": {
"uri": "v-0576p-1400k-libx264-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-s14.mp4",
"number": 13
},
{
"duration": 4,
"uri": "v-0576p-1400k-libx264-s15.mp4",
"timeline": 0,
"map": {
"uri": "v-0576p-1400k-libx264-init.mp4",
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-init.mp4"
},
"resolvedUri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/v-0576p-1400k-libx264-s15.mp4",
"number": 14
}
],
"uri": "combined-playlist",
"resolvedUri": "combined-playlist",
"playlistType": "VOD",
"endList": true,
"mediaSequence": 0,
"discontinuitySequence": 0,
"discontinuityStarts": []
}
]
}

View File

@ -0,0 +1,850 @@
/* global window document */
/* eslint-disable vars-on-top, no-var, object-shorthand, no-console */
(function(window) {
var representationsEl = document.getElementById('representations');
representationsEl.addEventListener('change', function() {
var selectedIndex = representationsEl.selectedIndex;
if (selectedIndex < 0 || !window.vhs) {
return;
}
var selectedOption = representationsEl.options[representationsEl.selectedIndex];
if (!selectedOption) {
return;
}
var id = selectedOption.value;
window.vhs.representations().forEach(function(rep) {
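// disable every representation except the selected one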
rep.playlist.disabled = rep.id !== id;
});
window.pc.fastQualityChange_(selectedOption);
});
var isManifestObjectType = function(url) {
return (/application\/vnd\.videojs\.vhs\+json/).test(url);
};
var hlsOptGroup = document.querySelector('[label="hls"]');
var dashOptGroup = document.querySelector('[label="dash"]');
var drmOptGroup = document.querySelector('[label="drm"]');
var liveOptGroup = document.querySelector('[label="live"]');
var llliveOptGroup = document.querySelector('[label="low latency live"]');
var manifestOptGroup = document.querySelector('[label="json manifest object"]');
var sourceList;
var hlsDataManifest;
var dashDataManifest;
var addSourcesToDom = function() {
if (!sourceList || !hlsDataManifest || !dashDataManifest) {
return;
}
sourceList.forEach(function(source) {
var option = document.createElement('option');
option.innerText = source.name;
option.value = source.uri;
if (source.keySystems) {
option.setAttribute('data-key-systems', JSON.stringify(source.keySystems, null, 2));
}
if (source.mimetype) {
option.setAttribute('data-mimetype', source.mimetype);
}
if (source.features.indexOf('low-latency') !== -1) {
llliveOptGroup.appendChild(option);
} else if (source.features.indexOf('live') !== -1) {
liveOptGroup.appendChild(option);
} else if (source.keySystems) {
drmOptGroup.appendChild(option);
} else if (source.mimetype === 'application/x-mpegurl') {
hlsOptGroup.appendChild(option);
} else if (source.mimetype === 'application/dash+xml') {
dashOptGroup.appendChild(option);
}
});
var hlsOption = document.createElement('option');
var dashOption = document.createElement('option');
dashOption.innerText = 'Dash Manifest Object Test, does not survive page reload';
dashOption.value = `data:application/vnd.videojs.vhs+json,${dashDataManifest}`;
hlsOption.innerText = 'HLS Manifest Object Test, does not survive page reload';
hlsOption.value = `data:application/vnd.videojs.vhs+json,${hlsDataManifest}`;
manifestOptGroup.appendChild(hlsOption);
manifestOptGroup.appendChild(dashOption);
};
var sourcesXhr = new window.XMLHttpRequest();
sourcesXhr.addEventListener('load', function() {
sourceList = JSON.parse(sourcesXhr.responseText);
addSourcesToDom();
});
sourcesXhr.open('GET', './scripts/sources.json');
sourcesXhr.send();
var hlsManifestXhr = new window.XMLHttpRequest();
hlsManifestXhr.addEventListener('load', function() {
hlsDataManifest = hlsManifestXhr.responseText;
addSourcesToDom();
});
hlsManifestXhr.open('GET', './scripts/hls-manifest-object.json');
hlsManifestXhr.send();
var dashManifestXhr = new window.XMLHttpRequest();
dashManifestXhr.addEventListener('load', function() {
dashDataManifest = dashManifestXhr.responseText;
addSourcesToDom();
});
dashManifestXhr.open('GET', './scripts/dash-manifest-object.json');
dashManifestXhr.send();
// all relevant elements
var urlButton = document.getElementById('load-url');
var sources = document.getElementById('load-source');
var stateEls = {};
var getInputValue = function(el) {
if (el.type === 'url' || el.type === 'text' || el.nodeName.toLowerCase() === 'textarea') {
if (isManifestObjectType(el.value)) {
return '';
}
return encodeURIComponent(el.value);
} else if (el.type === 'select-one') {
return el.options[el.selectedIndex].value;
} else if (el.type === 'checkbox') {
return el.checked;
}
console.warn('unhandled input type ' + el.type);
return '';
};
var setInputValue = function(el, value) {
if (el.type === 'url' || el.type === 'text' || el.nodeName.toLowerCase() === 'textarea') {
el.value = decodeURIComponent(value);
} else if (el.type === 'select-one') {
for (var i = 0; i < el.options.length; i++) {
if (el.options[i].value === value) {
el.options[i].selected = true;
}
}
} else {
// get the `value` into a Boolean.
el.checked = JSON.parse(value);
}
};
var newEvent = function(name) {
var event;
if (typeof window.Event === 'function') {
event = new window.Event(name);
} else {
event = document.createEvent('Event');
event.initEvent(name, true, true);
}
return event;
};
// taken from video.js
var getFileExtension = function(path) {
var splitPathRe;
var pathParts;
if (typeof path === 'string') {
splitPathRe = /^(\/?)([\s\S]*?)((?:\.{1,2}|[^\/]*?)(\.([^\.\/\?]+)))(?:[\/]*|[\?].*)$/i;
pathParts = splitPathRe.exec(path);
if (pathParts) {
return pathParts.pop().toLowerCase();
}
}
return '';
};
var saveState = function() {
var query = '';
if (!window.history.replaceState) {
return;
}
Object.keys(stateEls).forEach(function(elName) {
var symbol = query.length ? '&' : '?';
query += symbol + elName + '=' + getInputValue(stateEls[elName]);
});
window.history.replaceState({}, 'vhs demo', query);
};
window.URLSearchParams = window.URLSearchParams || function(locationSearch) {
this.get = function(name) {
var results = new RegExp('[\?&]' + name + '=([^&#]*)').exec(locationSearch);
return results ? decodeURIComponent(results[1]) : null;
};
};
// eslint-disable-next-line
var loadState = function() {
var params = new window.URLSearchParams(window.location.search);
return Object.keys(stateEls).reduce(function(acc, elName) {
acc[elName] = typeof params.get(elName) !== 'object' ? params.get(elName) : getInputValue(stateEls[elName]);
return acc;
}, {});
};
// eslint-disable-next-line
var reloadScripts = function(urls, cb) {
var el = document.getElementById('reload-scripts');
if (!el) {
el = document.createElement('div');
el.id = 'reload-scripts';
document.body.appendChild(el);
}
while (el.firstChild) {
el.removeChild(el.firstChild);
}
var loaded = [];
var checkDone = function() {
if (loaded.length === urls.length) {
cb();
}
};
urls.forEach(function(url) {
var script = document.createElement('script');
// scripts marked as defer will be loaded asynchronously but will be executed in the order they are in the DOM
script.defer = true;
// dynamically created scripts are async by default unless otherwise specified
// async scripts are loaded asynchronously but also executed as soon as they are loaded
// we want to load them in the order they are added therefore we want to turn off async
script.async = false;
script.src = url;
script.onload = function() {
loaded.push(url);
checkDone();
};
el.appendChild(script);
});
};
var regenerateRepresentations = function() {
while (representationsEl.firstChild) {
representationsEl.removeChild(representationsEl.firstChild);
}
var selectedIndex;
window.vhs.representations().forEach(function(rep, i) {
var option = document.createElement('option');
option.value = rep.id;
option.innerText = JSON.stringify({
id: rep.id,
videoCodec: rep.codecs.video,
audioCodec: rep.codecs.audio,
bandwidth: rep.bandwidth,
height: rep.height,
width: rep.width
});
if (window.pc.media().id === rep.id) {
selectedIndex = i;
}
representationsEl.appendChild(option);
});
representationsEl.selectedIndex = selectedIndex;
};
function getBuffered(buffered) {
var bufferedText = '';
if (!buffered) {
return bufferedText;
}
if (buffered.length) {
bufferedText += buffered.start(0) + ' - ' + buffered.end(0);
}
for (var i = 1; i < buffered.length; i++) {
bufferedText += ', ' + buffered.start(i) + ' - ' + buffered.end(i);
}
return bufferedText;
}
var setupSegmentMetadata = function(player) {
// setup segment metadata
var segmentMetadata = document.querySelector('#segment-metadata');
player.one('loadedmetadata', function() {
var tracks = player.textTracks();
var segmentMetadataTrack;
for (var i = 0; i < tracks.length; i++) {
if (tracks[i].label === 'segment-metadata') {
segmentMetadataTrack = tracks[i];
}
}
while (segmentMetadata.children.length) {
segmentMetadata.removeChild(segmentMetadata.firstChild);
}
if (segmentMetadataTrack) {
segmentMetadataTrack.addEventListener('cuechange', function() {
var cues = segmentMetadataTrack.activeCues || [];
while (segmentMetadata.children.length) {
segmentMetadata.removeChild(segmentMetadata.firstChild);
}
for (var j = 0; j < cues.length; j++) {
var text = JSON.stringify(JSON.parse(cues[j].text), null, 2);
var li = document.createElement('li');
var pre = document.createElement('pre');
pre.classList.add('border', 'rounded', 'p-2');
pre.textContent = text;
li.appendChild(pre);
segmentMetadata.appendChild(li);
}
});
}
});
};
var setupPlayerStats = function(player) {
player.on('dispose', () => {
if (window.statsTimer) {
window.clearInterval(window.statsTimer);
window.statsTimer = null;
}
});
var currentTimeStat = document.querySelector('.current-time-stat');
var bufferedStat = document.querySelector('.buffered-stat');
var videoBufferedStat = document.querySelector('.video-buffered-stat');
var audioBufferedStat = document.querySelector('.audio-buffered-stat');
var seekableStartStat = document.querySelector('.seekable-start-stat');
var seekableEndStat = document.querySelector('.seekable-end-stat');
var videoBitrateState = document.querySelector('.video-bitrate-stat');
var measuredBitrateStat = document.querySelector('.measured-bitrate-stat');
var videoTimestampOffset = document.querySelector('.video-timestampoffset');
var audioTimestampOffset = document.querySelector('.audio-timestampoffset');
player.on('timeupdate', function() {
currentTimeStat.textContent = player.currentTime().toFixed(1);
});
window.statsTimer = window.setInterval(function() {
var oldStart;
var oldEnd;
var seekable = player.seekable();
if (seekable && seekable.length) {
oldStart = seekableStartStat.textContent;
if (seekable.start(0).toFixed(1) !== oldStart) {
seekableStartStat.textContent = seekable.start(0).toFixed(1);
}
oldEnd = seekableEndStat.textContent;
if (seekable.end(0).toFixed(1) !== oldEnd) {
seekableEndStat.textContent = seekable.end(0).toFixed(1);
}
}
// buffered
bufferedStat.textContent = getBuffered(player.buffered());
// exit early if no VHS
if (!player.tech(true).vhs) {
videoBufferedStat.textContent = '';
audioBufferedStat.textContent = '';
videoBitrateState.textContent = '';
measuredBitrateStat.textContent = '';
videoTimestampOffset.textContent = '';
audioTimestampOffset.textContent = '';
return;
}
videoBufferedStat.textContent = getBuffered(player.tech(true).vhs.playlistController_.mainSegmentLoader_.sourceUpdater_.videoBuffer &&
player.tech(true).vhs.playlistController_.mainSegmentLoader_.sourceUpdater_.videoBuffer.buffered);
// demuxed audio
var audioBuffer = getBuffered(player.tech(true).vhs.playlistController_.audioSegmentLoader_.sourceUpdater_.audioBuffer &&
player.tech(true).vhs.playlistController_.audioSegmentLoader_.sourceUpdater_.audioBuffer.buffered);
// muxed audio
if (!audioBuffer) {
audioBuffer = getBuffered(player.tech(true).vhs.playlistController_.mainSegmentLoader_.sourceUpdater_.audioBuffer &&
player.tech(true).vhs.playlistController_.mainSegmentLoader_.sourceUpdater_.audioBuffer.buffered);
}
audioBufferedStat.textContent = audioBuffer;
if (player.tech(true).vhs.playlistController_.audioSegmentLoader_.sourceUpdater_.audioBuffer) {
audioTimestampOffset.textContent = player.tech(true).vhs.playlistController_.audioSegmentLoader_.sourceUpdater_.audioBuffer.timestampOffset;
} else if (player.tech(true).vhs.playlistController_.mainSegmentLoader_.sourceUpdater_.audioBuffer) {
audioTimestampOffset.textContent = player.tech(true).vhs.playlistController_.mainSegmentLoader_.sourceUpdater_.audioBuffer.timestampOffset;
}
if (player.tech(true).vhs.playlistController_.mainSegmentLoader_.sourceUpdater_.videoBuffer) {
videoTimestampOffset.textContent = player.tech(true).vhs.playlistController_.mainSegmentLoader_.sourceUpdater_.videoBuffer.timestampOffset;
}
// bitrates
var playlist = player.tech_.vhs.playlists.media();
if (playlist && playlist.attributes && playlist.attributes.BANDWIDTH) {
videoBitrateState.textContent = (playlist.attributes.BANDWIDTH / 1024).toLocaleString(undefined, {
maximumFractionDigits: 1
}) + ' kbps';
}
if (player.tech_.vhs.bandwidth) {
measuredBitrateStat.textContent = (player.tech_.vhs.bandwidth / 1024).toLocaleString(undefined, {
maximumFractionDigits: 1
}) + ' kbps';
}
}, 100);
};
var setupContentSteeringData = function(player) {
var currentPathwayEl = document.querySelector('.current-pathway');
var availablePathwaysEl = document.querySelector('.available-pathways');
var steeringManifestEl = document.querySelector('.steering-manifest');
player.one('loadedmetadata', function() {
var steeringController = player.tech_.vhs && player.tech_.vhs.playlistController_.contentSteeringController_;
if (!steeringController) {
return;
}
var onContentSteering = function() {
currentPathwayEl.textContent = steeringController.currentPathway;
availablePathwaysEl.textContent = Array.from(steeringController.availablePathways_).join(', ');
steeringManifestEl.textContent = JSON.stringify(steeringController.steeringManifest);
};
steeringController.on('content-steering', onContentSteering);
});
};
[
'debug',
'autoplay',
'muted',
'fluid',
'minified',
'sync-workers',
'liveui',
'llhls',
'url',
'type',
'keysystems',
'buffer-water',
'exact-manifest-timings',
'pixel-diff-selector',
'network-info',
'dts-offset',
'override-native',
'object-fit',
'use-mms',
'preload',
'mirror-source',
'forced-subtitles',
'native-text-tracks'
].forEach(function(name) {
stateEls[name] = document.getElementById(name);
});
window.startDemo = function(cb) {
var state = loadState();
Object.keys(state).forEach(function(elName) {
setInputValue(stateEls[elName], state[elName]);
});
Array.prototype.forEach.call(sources.options, function(s, i) {
if (s.value === state.url) {
sources.selectedIndex = i;
}
});
stateEls.fluid.addEventListener('change', function(event) {
saveState();
if (event.target.checked) {
window['player-fixture'].style.aspectRatio = '16/9';
window['player-fixture'].style.minHeight = 'initial';
} else {
window['player-fixture'].style.aspectRatio = '';
window['player-fixture'].style.minHeight = '250px';
}
window.player.fluid(event.target.checked);
});
stateEls.muted.addEventListener('change', function(event) {
saveState();
window.player.muted(event.target.checked);
});
stateEls.autoplay.addEventListener('change', function(event) {
saveState();
window.player.autoplay(event.target.checked);
});
// stateEls that reload the player and scripts
[
'mirror-source',
'sync-workers',
'preload',
'llhls',
'buffer-water',
'override-native',
'object-fit',
'use-mms',
'liveui',
'pixel-diff-selector',
'network-info',
'dts-offset',
'exact-manifest-timings',
'forced-subtitles',
'native-text-tracks'
].forEach(function(name) {
stateEls[name].addEventListener('change', function(event) {
saveState();
stateEls.minified.dispatchEvent(newEvent('change'));
});
});
[
'llhls'
].forEach(function(name) {
stateEls[name].checked = true;
});
[
'exact-manifest-timings',
'pixel-diff-selector',
'buffer-water'
].forEach(function(name) {
stateEls[name].checked = false;
});
stateEls.debug.addEventListener('change', function(event) {
saveState();
window.videojs.log.level(event.target.checked ? 'debug' : 'info');
});
stateEls.minified.addEventListener('change', function(event) {
var urls = [
'node_modules/video.js/dist/alt/video.core',
'node_modules/videojs-contrib-eme/dist/videojs-contrib-eme',
'node_modules/videojs-contrib-quality-levels/dist/videojs-contrib-quality-levels',
'node_modules/videojs-contrib-quality-menu/dist/videojs-contrib-quality-menu'
].map(function(url) {
return url + (event.target.checked ? '.min' : '') + '.js';
});
if (stateEls['sync-workers'].checked) {
urls.push('dist/videojs-http-streaming-sync-workers.js');
} else {
urls.push('dist/videojs-http-streaming' + (event.target.checked ? '.min' : '') + '.js');
}
saveState();
if (window.player) {
window.player.dispose();
delete window.player;
}
if (window.videojs) {
delete window.videojs;
}
reloadScripts(urls, function() {
var player;
var fixture = document.getElementById('player-fixture');
var videoEl = document.createElement('video-js');
videoEl.setAttribute('controls', '');
videoEl.setAttribute('playsInline', '');
videoEl.setAttribute('preload', stateEls.preload.options[stateEls.preload.selectedIndex].value || 'auto');
videoEl.className = 'vjs-default-skin';
fixture.appendChild(videoEl);
var mirrorSource = getInputValue(stateEls['mirror-source']);
player = window.player = window.videojs(videoEl, {
plugins: {
qualityMenu: {}
},
liveui: stateEls.liveui.checked,
enableSourceset: mirrorSource,
html5: {
nativeTextTracks: getInputValue(stateEls['native-text-tracks']),
vhs: {
overrideNative: getInputValue(stateEls['override-native']),
experimentalUseMMS: getInputValue(stateEls['use-mms']),
bufferBasedABR: getInputValue(stateEls['buffer-water']),
llhls: getInputValue(stateEls.llhls),
exactManifestTimings: getInputValue(stateEls['exact-manifest-timings']),
leastPixelDiffSelector: getInputValue(stateEls['pixel-diff-selector']),
useNetworkInformationApi: getInputValue(stateEls['network-info']),
useDtsForTimestampOffset: getInputValue(stateEls['dts-offset']),
useForcedSubtitles: getInputValue(stateEls['forced-subtitles']),
usePlayerObjectFit: getInputValue(stateEls['object-fit'])
}
}
});
setupPlayerStats(player);
setupSegmentMetadata(player);
setupContentSteeringData(player);
// save player muted state interaction
player.on('volumechange', function() {
if (stateEls.muted.checked !== player.muted()) {
stateEls.muted.checked = player.muted();
saveState();
}
});
player.on('sourceset', function() {
var source = player.currentSource();
if (source.keySystems) {
var copy = JSON.parse(JSON.stringify(source.keySystems));
// have to delete pssh as it will often make keySystems too big
// for a uri
Object.keys(copy).forEach(function(key) {
if (copy[key].hasOwnProperty('pssh')) {
delete copy[key].pssh;
}
});
stateEls.keysystems.value = JSON.stringify(copy, null, 2);
}
if (source.src) {
stateEls.url.value = encodeURI(source.src);
}
if (source.type) {
stateEls.type.value = source.type;
}
saveState();
});
player.width(640);
player.height(264);
// configure videojs-contrib-eme
player.eme();
stateEls.debug.dispatchEvent(newEvent('change'));
stateEls.muted.dispatchEvent(newEvent('change'));
stateEls.fluid.dispatchEvent(newEvent('change'));
stateEls.autoplay.dispatchEvent(newEvent('change'));
// run the load url handler for the initial source
if (stateEls.url.value) {
urlButton.dispatchEvent(newEvent('click'));
} else {
sources.dispatchEvent(newEvent('change'));
}
player.on('loadedmetadata', function() {
if (player.tech_.vhs) {
window.vhs = player.tech_.vhs;
window.pc = player.tech_.vhs.playlistController_;
window.pc.mainPlaylistLoader_.on('mediachange', regenerateRepresentations);
regenerateRepresentations();
} else {
window.vhs = null;
window.pc = null;
}
});
cb(player);
});
});
var urlButtonClick = function(event) {
var ext;
var type = stateEls.type.value;
// reset type if it's a manifest object's type
if (type === 'application/vnd.videojs.vhs+json') {
type = '';
}
if (isManifestObjectType(stateEls.url.value)) {
type = 'application/vnd.videojs.vhs+json';
}
if (!type.trim()) {
ext = getFileExtension(stateEls.url.value);
if (ext === 'mpd') {
type = 'application/dash+xml';
} else if (ext === 'm3u8') {
type = 'application/x-mpegURL';
}
}
saveState();
var source = {
src: stateEls.url.value,
type: type
};
if (stateEls.keysystems.value) {
source.keySystems = JSON.parse(stateEls.keysystems.value);
}
sources.selectedIndex = -1;
Array.prototype.forEach.call(sources.options, function(s, i) {
if (s.value === stateEls.url.value) {
sources.selectedIndex = i;
}
});
window.player.src(source);
};
urlButton.addEventListener('click', urlButtonClick);
urlButton.addEventListener('tap', urlButtonClick);
sources.addEventListener('change', function(event) {
var selectedOption = sources.options[sources.selectedIndex];
if (!selectedOption) {
return;
}
var src = selectedOption.value;
stateEls.url.value = src;
stateEls.type.value = selectedOption.getAttribute('data-mimetype');
stateEls.keysystems.value = selectedOption.getAttribute('data-key-systems');
urlButton.dispatchEvent(newEvent('click'));
});
stateEls.url.addEventListener('keyup', function(event) {
if (event.key === 'Enter') {
urlButton.click();
}
});
stateEls.url.addEventListener('input', function(event) {
if (stateEls.type.value.length) {
stateEls.type.value = '';
}
});
stateEls.type.addEventListener('keyup', function(event) {
if (event.key === 'Enter') {
urlButton.click();
}
});
// run the change handler for the first time
stateEls.minified.dispatchEvent(newEvent('change'));
// Setup the download / copy log buttons
const downloadLogsButton = document.getElementById('download-logs');
const copyLogsButton = document.getElementById('copy-logs');
/**
* Window location and history joined with line breaks, stringifying any objects
*
* @return {string} Stringified history
*/
const stringifiedLogHistory = () => {
const player = document.querySelector('video-js').player;
const logs = [].concat(player.log.history());
const withVhs = !!player.tech(true).vhs;
return [
window.location.href,
window.navigator.userAgent,
`Video.js ${window.videojs.VERSION}`,
`Using VHS: ${withVhs}`,
withVhs ? JSON.stringify(player.tech(true).vhs.version()) : ''
].concat(logs.map(entryArgs => {
return entryArgs.map(item => {
return typeof item === 'object' ? JSON.stringify(item) : item;
});
})).join('\n');
};
/**
* Turn a bootstrap button class on briefly then revert to btn-outline-secondary
*
* @param {HTMLElement} el Element to add class to
* @param {string} stateClass Bootstrap button class suffix
*/
const doneFeedback = (el, stateClass) => {
el.classList.add(`btn-${stateClass}`);
el.classList.remove('btn-outline-secondary');
window.setTimeout(() => {
el.classList.add('btn-outline-secondary');
el.classList.remove(`btn-${stateClass}`);
}, 1500);
};
downloadLogsButton.addEventListener('click', function() {
const logHistory = stringifiedLogHistory();
const a = document.createElement('a');
const href = URL.createObjectURL(new Blob([logHistory], { type: 'text/plain' }));
a.setAttribute('download', 'vhs-player-logs.txt');
a.setAttribute('target', '_blank');
a.href = href;
a.click();
a.remove();
URL.revokeObjectURL(href);
doneFeedback(downloadLogsButton, 'success');
});
copyLogsButton.addEventListener('click', function() {
const logHistory = stringifiedLogHistory();
window.navigator.clipboard.writeText(logHistory).then(z => {
doneFeedback(copyLogsButton, 'success');
}).catch(e => {
doneFeedback(copyLogsButton, 'danger');
console.log('Copy failed', e);
});
});
};
}(window));

View File

@ -0,0 +1,41 @@
const generate = require('videojs-generate-karma-config');
const CI_TEST_TYPE = process.env.CI_TEST_TYPE || '';
module.exports = function(config) {
// see https://github.com/videojs/videojs-generate-karma-config
// for options
const options = {
coverage: CI_TEST_TYPE === 'coverage' ? true : false,
preferHeadless: false,
browsers(aboutToRun) {
return aboutToRun.filter(function(launcherName) {
return !(/(Safari|Chromium)/).test(launcherName);
});
},
files(defaults) {
defaults.splice(
defaults.indexOf('node_modules/video.js/dist/video.js'),
1,
'node_modules/video.js/dist/alt/video.core.js'
);
return defaults;
},
browserstackLaunchers(defaults) {
// do not run on browserstack for coverage
if (CI_TEST_TYPE === 'coverage') {
defaults = {};
}
return defaults;
},
serverBrowsers() {
return [];
}
};
config = generate(config, options);
// any other custom stuff not supported by options here!
};

View File

@ -0,0 +1,41 @@
const path = require('path');
const sh = require('shelljs');
const deployDir = 'deploy';
const files = [
'node_modules/video.js/dist/video-js.css',
'node_modules/video.js/dist/alt/video.core.js',
'node_modules/video.js/dist/alt/video.core.min.js',
'node_modules/videojs-contrib-eme/dist/videojs-contrib-eme.js',
'node_modules/videojs-contrib-eme/dist/videojs-contrib-eme.min.js',
'node_modules/videojs-contrib-quality-levels/dist/videojs-contrib-quality-levels.js',
'node_modules/videojs-contrib-quality-levels/dist/videojs-contrib-quality-levels.min.js',
'node_modules/videojs-contrib-quality-menu/dist/videojs-contrib-quality-menu.css',
'node_modules/videojs-contrib-quality-menu/dist/videojs-contrib-quality-menu.js',
'node_modules/videojs-contrib-quality-menu/dist/videojs-contrib-quality-menu.min.js',
'node_modules/bootstrap/dist/js/bootstrap.js',
'node_modules/bootstrap/dist/css/bootstrap.css',
'node_modules/d3/d3.min.js',
'logo.svg',
'scripts/sources.json',
'scripts/index.js',
'scripts/old-index.js',
'scripts/dash-manifest-object.json',
'scripts/hls-manifest-object.json',
'test/dist/bundle.js'
];
// cleanup previous deploy
sh.rm('-rf', deployDir);
// make sure the directory exists
sh.mkdir('-p', deployDir);
// create nested directories
files
.map((file) => path.dirname(file))
.forEach((dir) => sh.mkdir('-p', path.join(deployDir, dir)));
// copy files/folders to deploy dir
files
.concat('dist', 'index.html', 'old-index.html', 'utils')
.forEach((file) => sh.cp('-r', file, path.join(deployDir, file)));

View File

@ -0,0 +1,135 @@
const generate = require('videojs-generate-rollup-config');
const worker = require('rollup-plugin-worker-factory');
const {terser} = require('rollup-plugin-terser');
const createTestData = require('./create-test-data.js');
const replace = require('@rollup/plugin-replace');
const strip = require('@rollup/plugin-strip');
const CI_TEST_TYPE = process.env.CI_TEST_TYPE || '';
let syncWorker;
// see https://github.com/videojs/videojs-generate-rollup-config
// for options
const options = {
input: 'src/videojs-http-streaming.js',
distName: 'videojs-http-streaming',
excludeCoverage(defaults) {
defaults.push(/^rollup-plugin-worker-factory/);
defaults.push(/^create-test-data!/);
return defaults;
},
globals(defaults) {
defaults.browser['@xmldom/xmldom'] = 'window';
defaults.test['@xmldom/xmldom'] = 'window';
return defaults;
},
externals(defaults) {
return Object.assign(defaults, {
module: defaults.module.concat([
'aes-decrypter',
'm3u8-parser',
'mpd-parser',
'mux.js',
'@videojs/vhs-utils'
])
});
},
plugins(defaults) {
// add worker and createTestData to the front of plugin lists
defaults.module.unshift('worker');
defaults.browser.unshift('worker');
// change this to `syncWorker` for a synchronous web worker
// during unit tests
if (CI_TEST_TYPE === 'coverage') {
defaults.test.unshift('syncWorker');
} else {
defaults.test.unshift('worker');
}
defaults.test.unshift('createTestData');
if (CI_TEST_TYPE === 'playback-min') {
defaults.test.push('uglify');
}
// istanbul is only in the list for regular builds and not watch
if (CI_TEST_TYPE !== 'coverage' && defaults.test.indexOf('istanbul') !== -1) {
defaults.test.splice(defaults.test.indexOf('istanbul'), 1);
}
defaults.module.unshift('replace');
defaults.module.unshift('strip');
defaults.browser.unshift('strip');
return defaults;
},
primedPlugins(defaults) {
defaults = Object.assign(defaults, {
replace: replace({
// single quote replace
"require('@videojs/vhs-utils/es": "require('@videojs/vhs-utils/cjs",
// double quote replace
'require("@videojs/vhs-utils/es': 'require("@videojs/vhs-utils/cjs'
}),
uglify: terser({
output: {comments: 'some'},
compress: {passes: 2}
}),
strip: strip({
functions: ['TEST_ONLY_*'],
debugger: false,
sourceMap: false
}),
createTestData: createTestData()
});
defaults.worker = worker({type: 'browser', plugins: [
defaults.resolve,
defaults.json,
defaults.commonjs,
defaults.babel
]});
defaults.syncWorker = syncWorker = worker({type: 'mock', plugins: [
defaults.resolve,
defaults.json,
defaults.commonjs,
defaults.babel
]});
return defaults;
},
babel(defaults) {
const presetEnvSettings = defaults.presets[0][1];
presetEnvSettings.exclude = presetEnvSettings.exclude || [];
presetEnvSettings.exclude.push('@babel/plugin-transform-typeof-symbol');
return defaults;
}
};
if (CI_TEST_TYPE === 'playback' || CI_TEST_TYPE === 'playback-min') {
options.testInput = 'test/playback.test.js';
} else if (CI_TEST_TYPE === 'unit' || CI_TEST_TYPE === 'coverage') {
options.testInput = {include: ['test/**/*.test.js'], exclude: ['test/playback.test.js']};
}
const config = generate(options);
if (config.builds.browser) {
config.builds.syncWorkers = config.makeBuild('browser', {
output: {
name: 'httpStreaming',
format: 'umd',
file: 'dist/videojs-http-streaming-sync-workers.js'
}
});
config.builds.syncWorkers.plugins[0] = syncWorker;
}
// Add additional builds/customization here!
// export the builds to rollup
export default Object.values(config.builds);

View File

@ -0,0 +1,447 @@
[
{
"name": "Bipbop - Muxed TS with 1 alt Audio, 5 captions",
"uri": "https://d2zihajmogu5jn.cloudfront.net/bipbop-advanced/bipbop_16x9_variant.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "FMP4 and ts both muxed",
"uri": "https://d2zihajmogu5jn.cloudfront.net/ts-fmp4/index.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Advanced Bipbop - ts and captions muxed",
"uri": "https://devstreaming-cdn.apple.com/videos/streaming/examples/img_bipbop_adv_example_ts/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Advanced Bipbop - FMP4 and captions muxed",
"uri": "https://devstreaming-cdn.apple.com/videos/streaming/examples/img_bipbop_adv_example_fmp4/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Advanced Bipbop - FMP4 hevc, demuxed",
"uri": "https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_adv_example_hevc/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Angel One - FMP4 demuxed, many audio/captions",
"uri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/hls.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Parkour - FMP4 demuxed",
"uri": "https://bitdash-a.akamaihd.net/content/MI201109210084_1/m3u8s-fmp4/f08e80da-bf1d-4e3d-8899-f0f6155f6efa.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Song - ts Audio only",
"uri": "https://s3.amazonaws.com/qa.jwplayer.com/~alex/121628/new_master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Coit Tower drone footage - 4 8 second segment",
"uri": "https://d2zihajmogu5jn.cloudfront.net/CoitTower/master_ts_segtimes.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Disney's Oceans trailer - HLSe, ts Encrypted",
"uri": "https://playertest.longtailvideo.com/adaptive/oceans_aes/oceans_aes.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Sintel - ts with audio/subs and a 4k rendtion",
"uri": "https://bitmovin-a.akamaihd.net/content/sintel/hls/playlist.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Boat Ipsum Subs - HLS + subtitles",
"uri": "https://d2zihajmogu5jn.cloudfront.net/hls-webvtt/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Boat Video Only",
"uri": "https://d2zihajmogu5jn.cloudfront.net/video-only/out.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Boat Audio Only",
"uri": "https://d2zihajmogu5jn.cloudfront.net/audio-only/out.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Boat 4K",
"uri": "https://d2zihajmogu5jn.cloudfront.net/4k-hls/out.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Boat Misaligned - 3, 5, 7, second segment playlists",
"uri": "https://d2zihajmogu5jn.cloudfront.net/misaligned-playlists/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "BBB-CMIF: Big Buck Bunny Dark Truths - demuxed, fmp4",
"uri": "https://storage.googleapis.com/shaka-demo-assets/bbb-dark-truths-hls/hls.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Big Buck Bunny - demuxed audio/video, includes 4K, burns in frame, pts, resolution, bitrate values",
"uri": "https://dash.akamaized.net/akamai/bbb_30fps/bbb_30fps.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "Angel One - fmp4, webm, subs (TODO: subs are broken), alternate audio tracks",
"uri": "https://storage.googleapis.com/shaka-demo-assets/angel-one/dash.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "Angel One - Widevine, fmp4, webm, subs, alternate audio tracks",
"uri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-widevine/dash.mpd",
"mimetype": "application/dash+xml",
"features": [],
"keySystems": {
"com.widevine.alpha": "https://cwip-shaka-proxy.appspot.com/no_auth"
}
},
{
"name": "BBB-CMIF: Big Buck Bunny Dark Truths - demuxed, fmp4",
"uri": "https://storage.googleapis.com/shaka-demo-assets/bbb-dark-truths/dash.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "SIDX demuxed, 2 audio",
"uri": "https://dash.akamaized.net/dash264/TestCases/10a/1/iis_forest_short_poem_multi_lang_480p_single_adapt_aaclc_sidx.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "SIDX bipbop-like",
"uri": "https://download.tsi.telecom-paristech.fr/gpac/DASH_CONFORMANCE/TelecomParisTech/mp4-onDemand/mp4-onDemand-mpd-AV.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "Google self-driving car - SIDX",
"uri": "https://yt-dash-mse-test.commondatastorage.googleapis.com/media/car-20120827-manifest.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "Sintel - single rendition",
"uri": "https://d2zihajmogu5jn.cloudfront.net/sintel_dash/sintel_vod.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "HLS - Live - Axinom live stream, may not always be available",
"uri": "https://akamai-axtest.akamaized.net/routes/lapd-v1-acceptance/www_c4/Manifest.m3u8",
"mimetype": "application/x-mpegurl",
"features": ["live"]
},
{
"name": "DASH - Live - Axinom live stream, may not always be available",
"uri": "https://akamai-axtest.akamaized.net/routes/lapd-v1-acceptance/www_c4/Manifest.mpd",
"mimetype": "application/dash+xml",
"features": ["live"]
},
{
"name": "DASH - Live simulated DASH from DASH IF",
"uri": "https://livesim.dashif.org/livesim/mup_30/testpic_2s/Manifest.mpd",
"mimetype": "application/dash+xml",
"features": ["live"]
},
{
"name": "DASH - Shaka Player Source Simulated Live",
"uri": "https://storage.googleapis.com/shaka-live-assets/player-source.mpd",
"mimetype": "application/dash+xml",
"features": ["live"]
},
{
"name": "Apple's LL-HLS test stream",
"uri": "https://ll-hls-test.apple.com/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": ["live", "low-latency"]
},
{
"name": "Apple's LL-HLS test stream, cmaf, fmp4",
"uri": "https://ll-hls-test.apple.com/cmaf/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": ["live", "low-latency"]
},
{
"name": "Mux's LL-HLS test stream",
"uri": "https://stream.mux.com/v69RSHhFelSm4701snP22dYz2jICy4E4FUyk02rW4gxRM.m3u8",
"mimetype": "application/x-mpegurl",
"features": ["live", "low-latency"]
},
{
"name": "Axinom Multi DRM - DASH, 4k, HEVC, Playready, Widevine",
"uri": "https://media.axprod.net/TestVectors/v7-MultiDRM-SingleKey/Manifest.mpd",
"mimetype": "application/dash+xml",
"features": [],
"keySystems": {
"com.microsoft.playready": {
"url": "https://drm-playready-licensing.axtest.net/AcquireLicense",
"licenseHeaders": {
"X-AxDRM-Message": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ2ZXJzaW9uIjoxLCJjb21fa2V5X2lkIjoiYjMzNjRlYjUtNTFmNi00YWUzLThjOTgtMzNjZWQ1ZTMxYzc4IiwibWVzc2FnZSI6eyJ0eXBlIjoiZW50aXRsZW1lbnRfbWVzc2FnZSIsImtleXMiOlt7ImlkIjoiOWViNDA1MGQtZTQ0Yi00ODAyLTkzMmUtMjdkNzUwODNlMjY2IiwiZW5jcnlwdGVkX2tleSI6ImxLM09qSExZVzI0Y3Iya3RSNzRmbnc9PSJ9XX19.4lWwW46k-oWcah8oN18LPj5OLS5ZU-_AQv7fe0JhNjA"
}
},
"com.widevine.alpha": {
"url": "https://drm-widevine-licensing.axtest.net/AcquireLicense",
"licenseHeaders": {
"X-AxDRM-Message": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ2ZXJzaW9uIjoxLCJjb21fa2V5X2lkIjoiYjMzNjRlYjUtNTFmNi00YWUzLThjOTgtMzNjZWQ1ZTMxYzc4IiwibWVzc2FnZSI6eyJ0eXBlIjoiZW50aXRsZW1lbnRfbWVzc2FnZSIsImtleXMiOlt7ImlkIjoiOWViNDA1MGQtZTQ0Yi00ODAyLTkzMmUtMjdkNzUwODNlMjY2IiwiZW5jcnlwdGVkX2tleSI6ImxLM09qSExZVzI0Y3Iya3RSNzRmbnc9PSJ9XX19.4lWwW46k-oWcah8oN18LPj5OLS5ZU-_AQv7fe0JhNjA"
}
}
}
},
{
"name": "Axinom Multi DRM, Multi Period - DASH, 4k, HEVC, Playready, Widevine",
"uri": "https://media.axprod.net/TestVectors/v7-MultiDRM-MultiKey-MultiPeriod/Manifest.mpd",
"mimetype": "application/dash+xml",
"features": [],
"keySystems": {
"com.microsoft.playready": {
"url": "https://drm-playready-licensing.axtest.net/AcquireLicense",
"licenseHeaders": {
"X-AxDRM-Message": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ2ZXJzaW9uIjoxLCJjb21fa2V5X2lkIjoiYjMzNjRlYjUtNTFmNi00YWUzLThjOTgtMzNjZWQ1ZTMxYzc4IiwibWVzc2FnZSI6eyJ0eXBlIjoiZW50aXRsZW1lbnRfbWVzc2FnZSIsImtleXMiOlt7ImlkIjoiMDg3Mjc4NmUtZjllNy00NjVmLWEzYTItNGU1YjBlZjhmYTQ1IiwiZW5jcnlwdGVkX2tleSI6IlB3NitlRVlOY3ZqWWJmc2gzWDNmbWc9PSJ9LHsiaWQiOiJjMTRmMDcwOS1mMmI5LTQ0MjctOTE2Yi02MWI1MjU4NjUwNmEiLCJlbmNyeXB0ZWRfa2V5IjoiLzErZk5paDM4bXFSdjR5Y1l6bnQvdz09In0seyJpZCI6IjhiMDI5ZTUxLWQ1NmEtNDRiZC05MTBmLWQ0YjVmZDkwZmJhMiIsImVuY3J5cHRlZF9rZXkiOiJrcTBKdVpFanBGTjhzYVRtdDU2ME9nPT0ifSx7ImlkIjoiMmQ2ZTkzODctNjBjYS00MTQ1LWFlYzItYzQwODM3YjRiMDI2IiwiZW5jcnlwdGVkX2tleSI6IlRjUlFlQld4RW9IT0tIcmFkNFNlVlE9PSJ9LHsiaWQiOiJkZTAyZjA3Zi1hMDk4LTRlZTAtYjU1Ni05MDdjMGQxN2ZiYmMiLCJlbmNyeXB0ZWRfa2V5IjoicG9lbmNTN0dnbWVHRmVvSjZQRUFUUT09In0seyJpZCI6IjkxNGU2OWY0LTBhYjMtNDUzNC05ZTlmLTk4NTM2MTVlMjZmNiIsImVuY3J5cHRlZF9rZXkiOiJlaUkvTXNsbHJRNHdDbFJUL0xObUNBPT0ifSx7ImlkIjoiZGE0NDQ1YzItZGI1ZS00OGVmLWIwOTYtM2VmMzQ3YjE2YzdmIiwiZW5jcnlwdGVkX2tleSI6IjJ3K3pkdnFycERWM3hSMGJKeTR1Z3c9PSJ9LHsiaWQiOiIyOWYwNWU4Zi1hMWFlLTQ2ZTQtODBlOS0yMmRjZDQ0Y2Q3YTEiLCJlbmNyeXB0ZWRfa2V5IjoiL3hsU0hweHdxdTNnby9nbHBtU2dhUT09In0seyJpZCI6IjY5ZmU3MDc3LWRhZGQtNGI1NS05NmNkLWMzZWRiMzk5MTg1MyIsImVuY3J5cHRlZF9rZXkiOiJ6dTZpdXpOMnBzaTBaU3hRaUFUa1JRPT0ifV19fQ.BXr93Et1krYMVs-CUnf7F3ywJWFRtxYdkR7Qn4w3-to"
}
},
"com.widevine.alpha": {
"url": "https://drm-widevine-licensing.axtest.net/AcquireLicense",
"licenseHeaders": {
"X-AxDRM-Message": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ2ZXJzaW9uIjoxLCJjb21fa2V5X2lkIjoiYjMzNjRlYjUtNTFmNi00YWUzLThjOTgtMzNjZWQ1ZTMxYzc4IiwibWVzc2FnZSI6eyJ0eXBlIjoiZW50aXRsZW1lbnRfbWVzc2FnZSIsImtleXMiOlt7ImlkIjoiMDg3Mjc4NmUtZjllNy00NjVmLWEzYTItNGU1YjBlZjhmYTQ1IiwiZW5jcnlwdGVkX2tleSI6IlB3NitlRVlOY3ZqWWJmc2gzWDNmbWc9PSJ9LHsiaWQiOiJjMTRmMDcwOS1mMmI5LTQ0MjctOTE2Yi02MWI1MjU4NjUwNmEiLCJlbmNyeXB0ZWRfa2V5IjoiLzErZk5paDM4bXFSdjR5Y1l6bnQvdz09In0seyJpZCI6IjhiMDI5ZTUxLWQ1NmEtNDRiZC05MTBmLWQ0YjVmZDkwZmJhMiIsImVuY3J5cHRlZF9rZXkiOiJrcTBKdVpFanBGTjhzYVRtdDU2ME9nPT0ifSx7ImlkIjoiMmQ2ZTkzODctNjBjYS00MTQ1LWFlYzItYzQwODM3YjRiMDI2IiwiZW5jcnlwdGVkX2tleSI6IlRjUlFlQld4RW9IT0tIcmFkNFNlVlE9PSJ9LHsiaWQiOiJkZTAyZjA3Zi1hMDk4LTRlZTAtYjU1Ni05MDdjMGQxN2ZiYmMiLCJlbmNyeXB0ZWRfa2V5IjoicG9lbmNTN0dnbWVHRmVvSjZQRUFUUT09In0seyJpZCI6IjkxNGU2OWY0LTBhYjMtNDUzNC05ZTlmLTk4NTM2MTVlMjZmNiIsImVuY3J5cHRlZF9rZXkiOiJlaUkvTXNsbHJRNHdDbFJUL0xObUNBPT0ifSx7ImlkIjoiZGE0NDQ1YzItZGI1ZS00OGVmLWIwOTYtM2VmMzQ3YjE2YzdmIiwiZW5jcnlwdGVkX2tleSI6IjJ3K3pkdnFycERWM3hSMGJKeTR1Z3c9PSJ9LHsiaWQiOiIyOWYwNWU4Zi1hMWFlLTQ2ZTQtODBlOS0yMmRjZDQ0Y2Q3YTEiLCJlbmNyeXB0ZWRfa2V5IjoiL3hsU0hweHdxdTNnby9nbHBtU2dhUT09In0seyJpZCI6IjY5ZmU3MDc3LWRhZGQtNGI1NS05NmNkLWMzZWRiMzk5MTg1MyIsImVuY3J5cHRlZF9rZXkiOiJ6dTZpdXpOMnBzaTBaU3hRaUFUa1JRPT0ifV19fQ.BXr93Et1krYMVs-CUnf7F3ywJWFRtxYdkR7Qn4w3-to"
}
}
}
},
{
"name": "Axinom Clear - DASH, 4k, HEVC",
"uri": "https://media.axprod.net/TestVectors/v7-Clear/Manifest.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "Axinom Clear MultiPeriod - DASH, 4k, HEVC",
"uri": "https://media.axprod.net/TestVectors/v7-Clear/Manifest_MultiPeriod.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "DASH-IF simulated live",
"uri": "https://livesim.dashif.org/livesim/testpic_2s/Manifest.mpd",
"mimetype": "application/dash+xml",
"features": ["live"]
},
{
"name": "Tears of Steal - Widevine (Unified Streaming)",
"uri": "https://demo.unified-streaming.com/video/tears-of-steel/tears-of-steel-dash-widevine.ism/.mpd",
"mimetype": "application/dash+xml",
"features": [],
"keySystems": {
"com.widevine.alpha": "https://widevine-proxy.appspot.com/proxy"
}
},
{
"name": "Tears of Steal - PlayReady (Unified Streaming)",
"uri": "https://demo.unified-streaming.com/video/tears-of-steel/tears-of-steel-dash-playready.ism/.mpd",
"mimetype": "application/dash+xml",
"features": [],
"keySystems": {
"com.microsoft.playready": "https://test.playready.microsoft.com/service/rightsmanager.asmx"
}
},
{
"name": "Unified Streaming Live DASH",
"uri": "https://live.unified-streaming.com/scte35/scte35.isml/.mpd",
"mimetype": "application/dash+xml",
"features": ["live"]
},
{
"name": "Unified Streaming Live HLS",
"uri": "https://live.unified-streaming.com/scte35/scte35.isml/.m3u8",
"mimetype": "application/x-mpegurl",
"features": ["live"]
},
{
"name": "DOESN'T WORK - Bayerrischer Rundfunk Recorded Loop - DASH, may not always be available",
"uri": "https://irtdashreference-i.akamaihd.net/dash/live/901161/keepixo1/manifestBR2.mpd",
"mimetype": "application/dash+xml",
"features": ["live"]
},
{
"name": "DOESN'T WORK - Bayerrischer Rundfunk Recorded Loop - HLS, may not always be available",
"uri": "https://irtdashreference-i.akamaihd.net/dash/live/901161/keepixo1/playlistBR2.m3u8",
"mimetype": "application/x-mpegurl",
"features": ["live"]
},
{
"name": "Big Buck Bunny - Azure - DASH, Widevine, PlayReady",
"uri": "https://amssamples.streaming.mediaservices.windows.net/622b189f-ec39-43f2-93a2-201ac4e31ce1/BigBuckBunny.ism/manifest(format=mpd-time-csf)",
"mimetype": "application/dash+xml",
"features": [],
"keySystems": {
"com.widevine.alpha": "https://amssamples.keydelivery.mediaservices.windows.net/Widevine/?KID=1ab45440-532c-4399-94dc-5c5ad9584bac",
"com.microsoft.playready": "https://amssamples.keydelivery.mediaservices.windows.net/PlayReady/"
}
},
{
"name": "Big Buck Bunny Audio only, groups have same uri as renditons",
"uri": "https://d2zihajmogu5jn.cloudfront.net/audio-only-dupe-groups/prog_index.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Big Buck Bunny Demuxed av, audio only rendition same as group",
"uri": "https://d2zihajmogu5jn.cloudfront.net/demuxed-ts-with-audio-only-rendition/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "sidx v1 dash",
"uri": "https://d2zihajmogu5jn.cloudfront.net/sidx-v1-dash/Dog.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "fmp4 x264/flac no manifest codecs",
"uri": "https://d2zihajmogu5jn.cloudfront.net/fmp4-flac-no-manifest-codecs/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "fmp4 x264/opus no manifest codecs",
"uri": "https://d2zihajmogu5jn.cloudfront.net/fmp4-opus-no-manifest-codecs/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "fmp4 h264/aac no manifest codecs",
"uri": "https://d2zihajmogu5jn.cloudfront.net/fmp4-muxed-no-playlist-codecs/index.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "ts one valid codec among many invalid",
"uri": "https://d2zihajmogu5jn.cloudfront.net/ts-one-valid-many-invalid-codecs/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Legacy AVC Codec",
"uri": "https://d2zihajmogu5jn.cloudfront.net/legacy-avc-codec/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Pseudo-Live PDT test source",
"uri": "https://d2zihajmogu5jn.cloudfront.net/pdt-test-source/no-endlist.m3u8",
"mimetype": "application/x-mpegurl",
"features": ["live"]
},
{
"name": "PDT test source",
"uri": "https://d2zihajmogu5jn.cloudfront.net/pdt-test-source/endlist.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "audio only dash, two groups",
"uri": "https://d2zihajmogu5jn.cloudfront.net/audio-only-dash/dash.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "video only dash, two renditions",
"uri": "https://d2zihajmogu5jn.cloudfront.net/video-only-dash/dash.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "encrypted init segment",
"uri": "https://d2zihajmogu5jn.cloudfront.net/encrypted-init-segment/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Dash data uri for https://dash.akamaized.net/akamai/bbb_30fps/bbb_30fps.mpd",
"uri": "data:application/dash+xml;charset=utf-8,%3CMPD%20mediaPresentationDuration=%22PT634.566S%22%20minBufferTime=%22PT2.00S%22%20profiles=%22urn:hbbtv:dash:profile:isoff-live:2012,urn:mpeg:dash:profile:isoff-live:2011%22%20type=%22static%22%20xmlns=%22urn:mpeg:dash:schema:mpd:2011%22%20xmlns:xsi=%22http://www.w3.org/2001/XMLSchema-instance%22%20xsi:schemaLocation=%22urn:mpeg:DASH:schema:MPD:2011%20DASH-MPD.xsd%22%3E%20%3CBaseURL%3Ehttps://dash.akamaized.net/akamai/bbb_30fps/%3C/BaseURL%3E%20%3CPeriod%3E%20%20%3CAdaptationSet%20mimeType=%22video/mp4%22%20contentType=%22video%22%20subsegmentAlignment=%22true%22%20subsegmentStartsWithSAP=%221%22%20par=%2216:9%22%3E%20%20%20%3CSegmentTemplate%20duration=%22120%22%20timescale=%2230%22%20media=%22$RepresentationID$/$RepresentationID$_$Number$.m4v%22%20startNumber=%221%22%20initialization=%22$RepresentationID$/$RepresentationID$_0.m4v%22/%3E%20%20%20%3CRepresentation%20id=%22bbb_30fps_1024x576_2500k%22%20codecs=%22avc1.64001f%22%20bandwidth=%223134488%22%20width=%221024%22%20height=%22576%22%20frameRate=%2230%22%20sar=%221:1%22%20scanType=%22progressive%22/%3E%20%20%20%3CRepresentation%20id=%22bbb_30fps_1280x720_4000k%22%20codecs=%22avc1.64001f%22%20bandwidth=%224952892%22%20width=%221280%22%20height=%22720%22%20frameRate=%2230%22%20sar=%221:1%22%20scanType=%22progressive%22/%3E%20%20%20%3CRepresentation%20id=%22bbb_30fps_1920x1080_8000k%22%20codecs=%22avc1.640028%22%20bandwidth=%229914554%22%20width=%221920%22%20height=%221080%22%20frameRate=%2230%22%20sar=%221:1%22%20scanType=%22progressive%22/%3E%20%20%20%3CRepresentation%20id=%22bbb_30fps_320x180_200k%22%20codecs=%22avc1.64000d%22%20bandwidth=%22254320%22%20width=%22320%22%20height=%22180%22%20frameRate=%2230%22%20sar=%221:1%22%20scanType=%22progressive%22/%3E%20%20%20%3CRepresentation%20id=%22bbb_30fps_320x180_400k%22%20codecs=%22avc1.64000d%22%20bandwidth=%22507246%22%20width=%22320%22%20height=%22180%22%20frameRate=%2230%22%20sar=%221:1%22%20scanType=%22progressive%22/%3E%20%20%20%3CRepresentation%20id=%22bbb_30fps_480x270_600k%22%20codecs=%22avc1.640015%22%20bandwidth=%22759798%22%20width=%22480%22%20height=%22270%22%20frameRate=%2230%22%20sar=%221:1%22%20scanType=%22progressive%22/%3E%20%20%20%3CRepresentation%20id=%22bbb_30fps_640x360_1000k%22%20codecs=%22avc1.64001e%22%20bandwidth=%221254758%22%20width=%22640%22%20height=%22360%22%20frameRate=%2230%22%20sar=%221:1%22%20scanType=%22progressive%22/%3E%20%20%20%3CRepresentation%20id=%22bbb_30fps_640x360_800k%22%20codecs=%22avc1.64001e%22%20bandwidth=%221013310%22%20width=%22640%22%20height=%22360%22%20frameRate=%2230%22%20sar=%221:1%22%20scanType=%22progressive%22/%3E%20%20%20%3CRepresentation%20id=%22bbb_30fps_768x432_1500k%22%20codecs=%22avc1.64001e%22%20bandwidth=%221883700%22%20width=%22768%22%20height=%22432%22%20frameRate=%2230%22%20sar=%221:1%22%20scanType=%22progressive%22/%3E%20%20%20%3CRepresentation%20id=%22bbb_30fps_3840x2160_12000k%22%20codecs=%22avc1.640033%22%20bandwidth=%2214931538%22%20width=%223840%22%20height=%222160%22%20frameRate=%2230%22%20sar=%221:1%22%20scanType=%22progressive%22/%3E%20%20%3C/AdaptationSet%3E%20%20%3CAdaptationSet%20mimeType=%22audio/mp4%22%20contentType=%22audio%22%20subsegmentAlignment=%22true%22%20subsegmentStartsWithSAP=%221%22%3E%20%20%20%3CAccessibility%20schemeIdUri=%22urn:tva:metadata:cs:AudioPurposeCS:2007%22%20value=%226%22/%3E%20%20%20%3CRole%20schemeIdUri=%22urn:mpeg:dash:role:2011%22%20value=%22main%22/%3E%20%20%20%3CSegmentTemplate%20duration=%22192512%22%20timescale=%2248000%22%20m
edia=%22$RepresentationID$/$RepresentationID$_$Number$.m4a%22%20startNumber=%221%22%20initialization=%22$RepresentationID$/$RepresentationID$_0.m4a%22/%3E%20%20%20%3CRepresentation%20id=%22bbb_a64k%22%20codecs=%22mp4a.40.5%22%20bandwidth=%2267071%22%20audioSamplingRate=%2248000%22%3E%20%20%20%20%3CAudioChannelConfiguration%20schemeIdUri=%22urn:mpeg:dash:23003:3:audio_channel_configuration:2011%22%20value=%222%22/%3E%20%20%20%3C/Representation%3E%20%20%3C/AdaptationSet%3E%20%3C/Period%3E%3C/MPD%3E",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "HLS data uri for https://d2zihajmogu5jn.cloudfront.net/bipbop-advanced/bipbop_16x9_variant.m3u8",
"uri": "data:application/x-mpegurl;charset=utf-8,%23EXTM3U%0D%0A%0D%0A%23EXT-X-MEDIA%3ATYPE%3DAUDIO%2CGROUP-ID%3D%22bipbop_audio%22%2CLANGUAGE%3D%22eng%22%2CNAME%3D%22BipBop%20Audio%201%22%2CAUTOSELECT%3DYES%2CDEFAULT%3DYES%0D%0A%23EXT-X-MEDIA%3ATYPE%3DAUDIO%2CGROUP-ID%3D%22bipbop_audio%22%2CLANGUAGE%3D%22eng%22%2CNAME%3D%22BipBop%20Audio%202%22%2CAUTOSELECT%3DNO%2CDEFAULT%3DNO%2CURI%3D%22https%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Falternate_audio_aac_sinewave%2Fprog_index.m3u8%22%0D%0A%0D%0A%0D%0A%23EXT-X-MEDIA%3ATYPE%3DSUBTITLES%2CGROUP-ID%3D%22subs%22%2CNAME%3D%22English%22%2CDEFAULT%3DYES%2CAUTOSELECT%3DYES%2CFORCED%3DNO%2CLANGUAGE%3D%22en%22%2CCHARACTERISTICS%3D%22public.accessibility.transcribes-spoken-dialog%2C%20public.accessibility.describes-music-and-sound%22%2CURI%3D%22https%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fsubtitles%2Feng%2Fprog_index.m3u8%22%0D%0A%23EXT-X-MEDIA%3ATYPE%3DSUBTITLES%2CGROUP-ID%3D%22subs%22%2CNAME%3D%22English%20%28Forced%29%22%2CDEFAULT%3DNO%2CAUTOSELECT%3DNO%2CFORCED%3DYES%2CLANGUAGE%3D%22en%22%2CURI%3D%22https%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fsubtitles%2Feng_forced%2Fprog_index.m3u8%22%0D%0A%23EXT-X-MEDIA%3ATYPE%3DSUBTITLES%2CGROUP-ID%3D%22subs%22%2CNAME%3D%22Fran%C3%83%C2%A7ais%22%2CDEFAULT%3DNO%2CAUTOSELECT%3DYES%2CFORCED%3DNO%2CLANGUAGE%3D%22fr%22%2CCHARACTERISTICS%3D%22public.accessibility.transcribes-spoken-dialog%2C%20public.accessibility.describes-music-and-sound%22%2CURI%3D%22https%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fsubtitles%2Ffra%2Fprog_index.m3u8%22%0D%0A%23EXT-X-MEDIA%3ATYPE%3DSUBTITLES%2CGROUP-ID%3D%22subs%22%2CNAME%3D%22Fran%C3%83%C2%A7ais%20%28Forced%29%22%2CDEFAULT%3DNO%2CAUTOSELECT%3DNO%2CFORCED%3DYES%2CLANGUAGE%3D%22fr%22%2CURI%3D%22https%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fsubtitles%2Ffra_forced%2Fprog_index.m3u8%22%0D%0A%23EXT-X-MEDIA%3ATYPE%3DSUBTITLES%2CGROUP-ID%3D%22subs%22%2CNAME%3D%22Espa%C3%83%C2%B1ol%22%2CDEFAULT%3DNO%2CAUTOSELECT%3DYES%2CFORCED%3DNO%2CLANGUAGE%3D%22es%22%2CCHARACTERISTICS%3D%22public.accessibility.transcribes-spoken-dialog%2C%20public.accessibility.describes-music-and-sound%22%2CURI%3D%22https%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fsubtitles%2Fspa%2Fprog_index.m3u8%22%0D%0A%23EXT-X-MEDIA%3ATYPE%3DSUBTITLES%2CGROUP-ID%3D%22subs%22%2CNAME%3D%22Espa%C3%83%C2%B1ol%20%28Forced%29%22%2CDEFAULT%3DNO%2CAUTOSELECT%3DNO%2CFORCED%3DYES%2CLANGUAGE%3D%22es%22%2CURI%3D%22https%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fsubtitles%2Fspa_forced%2Fprog_index.m3u8%22%0D%0A%23EXT-X-MEDIA%3ATYPE%3DSUBTITLES%2CGROUP-ID%3D%22subs%22%2CNAME%3D%22%C3%A6%C2%97%C2%A5%C3%A6%C2%9C%C2%AC%C3%A8%C2%AA%C2%9E%22%2CDEFAULT%3DNO%2CAUTOSELECT%3DYES%2CFORCED%3DNO%2CLANGUAGE%3D%22ja%22%2CCHARACTERISTICS%3D%22public.accessibility.transcribes-spoken-dialog%2C%20public.accessibility.describes-music-and-sound%22%2CURI%3D%22https%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fsubtitles%2Fjpn%2Fprog_index.m3u8%22%0D%0A%23EXT-X-MEDIA%3ATYPE%3DSUBTITLES%2CGROUP-ID%3D%22subs%22%2CNAME%3D%22%C3%A6%C2%97%C2%A5%C3%A6%C2%9C%C2%AC%C3%A8%C2%AA%C2%9E%20%28Forced%29%22%2CDEFAULT%3DNO%2CAUTOSELECT%3DNO%2CFORCED%3DYES%2CLANGUAGE%3D%22ja%22%2CURI%3D%22https%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fsubtitles%2Fjpn_forced%2Fprog_index.m3u8%22%0D%0A%0D%0A%0D%0A%23EXT-X-STREAM-INF%3ABANDWIDTH%3D263851%2CCODECS%3D%22mp4a.40.2%2C%20avc1.4d400d%22%2CRESOLUTION%3D416x234%2CAUDIO%3D%22bipbop_audio%22%2CSUBTITLE
S%3D%22subs%22%0D%0Ahttps%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fgear1%2Fprog_index.m3u8%0D%0A%0D%0A%23EXT-X-STREAM-INF%3ABANDWIDTH%3D577610%2CCODECS%3D%22mp4a.40.2%2C%20avc1.4d401e%22%2CRESOLUTION%3D640x360%2CAUDIO%3D%22bipbop_audio%22%2CSUBTITLES%3D%22subs%22%0D%0Ahttps%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fgear2%2Fprog_index.m3u8%0D%0A%0D%0A%23EXT-X-STREAM-INF%3ABANDWIDTH%3D915905%2CCODECS%3D%22mp4a.40.2%2C%20avc1.4d401f%22%2CRESOLUTION%3D960x540%2CAUDIO%3D%22bipbop_audio%22%2CSUBTITLES%3D%22subs%22%0D%0Ahttps%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fgear3%2Fprog_index.m3u8%0D%0A%0D%0A%23EXT-X-STREAM-INF%3ABANDWIDTH%3D1030138%2CCODECS%3D%22mp4a.40.2%2C%20avc1.4d401f%22%2CRESOLUTION%3D1280x720%2CAUDIO%3D%22bipbop_audio%22%2CSUBTITLES%3D%22subs%22%0D%0Ahttps%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fgear4%2Fprog_index.m3u8%0D%0A%0D%0A%23EXT-X-STREAM-INF%3ABANDWIDTH%3D1924009%2CCODECS%3D%22mp4a.40.2%2C%20avc1.4d401f%22%2CRESOLUTION%3D1920x1080%2CAUDIO%3D%22bipbop_audio%22%2CSUBTITLES%3D%22subs%22%0D%0Ahttps%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fgear5%2Fprog_index.m3u8%0D%0A%0D%0A%23EXT-X-STREAM-INF%3ABANDWIDTH%3D41457%2CCODECS%3D%22mp4a.40.2%22%2CAUDIO%3D%22bipbop_audio%22%2CSUBTITLES%3D%22subs%22%0D%0Ahttps%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fgear0%2Fprog_index.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "audio only ts with program Map table every other segment",
"uri": "https://d2zihajmogu5jn.cloudfront.net/first-pmt-only/index.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "HDCP v1.0 DRM dash",
"uri": "https://storage.googleapis.com/wvmedia/cenc/h264/tears/tears.mpd#1",
"mimetype": "application/dash+xml",
"features": [],
"keySystems": {
"com.widevine.alpha": {
"url": "https://proxy.uat.widevine.com/proxy?video_id=GTS_SW_SECURE_CRYPTO_HDCP_V1&provider=widevine_test"
}
}
},
{
"name": "HDCP v2.0 DRM dash",
"uri": "https://storage.googleapis.com/wvmedia/cenc/h264/tears/tears.mpd#2",
"mimetype": "application/dash+xml",
"features": [],
"keySystems": {
"com.widevine.alpha": {
"url": "https://proxy.uat.widevine.com/proxy?video_id=GTS_SW_SECURE_CRYPTO_HDCP_V2&provider=widevine_test"
}
}
},
{
"name": "HDCP v2.1 DRM dash",
"uri": "https://storage.googleapis.com/wvmedia/cenc/h264/tears/tears.mpd@21",
"mimetype": "application/dash+xml",
"features": [],
"keySystems": {
"com.widevine.alpha": {
"url": "https://proxy.uat.widevine.com/proxy?video_id=GTS_SW_SECURE_CRYPTO_HDCP_V2_1&provider=widevine_test"
}
}
},
{
"name": "HDCP v2.2 DRM dash",
"uri": "https://storage.googleapis.com/wvmedia/cenc/h264/tears/tears.mpd#22",
"mimetype": "application/dash+xml",
"features": [],
"keySystems": {
"com.widevine.alpha": {
"url": "https://proxy.uat.widevine.com/proxy?video_id=GTS_SW_SECURE_CRYPTO_HDCP_V2_2&provider=widevine_test"
}
}
}
]

View File

@ -0,0 +1,101 @@
/**
* @file ad-cue-tags.js
*/
import window from 'global/window';
/**
* Searches for an ad cue that overlaps with the given mediaTime
*
* @param {Object} track
* the track to find the cue for
*
* @param {number} mediaTime
* the time to find the cue at
*
* @return {Object|null}
* the found cue or null
*/
export const findAdCue = function(track, mediaTime) {
const cues = track.cues;
for (let i = 0; i < cues.length; i++) {
const cue = cues[i];
if (mediaTime >= cue.adStartTime && mediaTime <= cue.adEndTime) {
return cue;
}
}
return null;
};
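// A hedged usage sketch (the cue shapes are illustrative; real cues are
// created by updateAdCues below): findAdCue is a linear scan over track.cues.
//
//   const track = { cues: [{ adStartTime: 10, adEndTime: 40 }] };
//   findAdCue(track, 25); // => the cue object
//   findAdCue(track, 45); // => null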
export const updateAdCues = function(media, track, offset = 0) {
if (!media.segments) {
return;
}
let mediaTime = offset;
let cue;
for (let i = 0; i < media.segments.length; i++) {
const segment = media.segments[i];
if (!cue) {
// Since the cues will span for at least the segment duration, adding a fudge
// factor of half segment duration will prevent duplicate cues from being
// created when timing info is not exact (e.g. cue start time initialized
// at 10.006677, but on the next call mediaTime is 10.003332)
cue = findAdCue(track, mediaTime + (segment.duration / 2));
}
if (cue) {
if ('cueIn' in segment) {
// Found a CUE-IN so end the cue
cue.endTime = mediaTime;
cue.adEndTime = mediaTime;
mediaTime += segment.duration;
cue = null;
continue;
}
if (mediaTime < cue.endTime) {
// Already processed this mediaTime for this cue
mediaTime += segment.duration;
continue;
}
// otherwise extend cue until a CUE-IN is found
cue.endTime += segment.duration;
} else {
if ('cueOut' in segment) {
cue = new window.VTTCue(
mediaTime,
mediaTime + segment.duration,
segment.cueOut
);
cue.adStartTime = mediaTime;
// Assumes tag format to be
// #EXT-X-CUE-OUT:30
cue.adEndTime = mediaTime + parseFloat(segment.cueOut);
track.addCue(cue);
}
if ('cueOutCont' in segment) {
// Entered into the middle of an ad cue
// Assumes tag format to be
// #EXT-X-CUE-OUT-CONT:10/30
const [adOffset, adTotal] = segment.cueOutCont.split('/').map(parseFloat);
cue = new window.VTTCue(
mediaTime,
mediaTime + segment.duration,
''
);
cue.adStartTime = mediaTime - adOffset;
cue.adEndTime = cue.adStartTime + adTotal;
track.addCue(cue);
}
}
mediaTime += segment.duration;
}
};
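// A hedged usage sketch: given a media playlist whose segments carry the
// cueOut / cueOutCont / cueIn properties produced by m3u8-parser, a single
// VTTCue is built per ad break. The track and segment values below are
// assumptions for illustration, not taken from a real manifest.
//
//   const track = player.addTextTrack('metadata', 'ad-cues');
//   const media = {
//     segments: [
//       { duration: 10, cueOut: '30' }, // #EXT-X-CUE-OUT:30, ad starts
//       { duration: 10, cueOutCont: '10/30' }, // still inside the ad
//       { duration: 10, cueIn: '' } // #EXT-X-CUE-IN, ad ends
//     ]
//   };
//   updateAdCues(media, track);
//   // track now holds one cue spanning 0-20, closed at the CUE-IN segment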

View File

@ -0,0 +1,131 @@
import { isArrayBufferView } from '@videojs/vhs-utils/es/byte-helpers';
/**
* @file bin-utils.js
*/
/**
* convert a TimeRange to text
*
* @param {TimeRange} range the timerange to use for conversion
* @param {number} i the iterator on the range to convert
* @return {string} the range in string format
*/
const textRange = function(range, i) {
return range.start(i) + '-' + range.end(i);
};
/**
* format a number as hex string
*
* @param {number} e The number
* @param {number} i the iterator
* @return {string} the hex formatted number as a string
*/
const formatHexString = function(e, i) {
const value = e.toString(16);
return '00'.substring(0, 2 - value.length) + value + (i % 2 ? ' ' : '');
};
const formatAsciiString = function(e) {
if (e >= 0x20 && e < 0x7e) {
return String.fromCharCode(e);
}
return '.';
};
/**
* Creates an object for sending to a web worker modifying properties that are TypedArrays
* into a new object with separate properties for the buffer, byteOffset, and byteLength.
*
* @param {Object} message
* Object of properties and values to send to the web worker
* @return {Object}
* Modified message with TypedArray values expanded
* @function createTransferableMessage
*/
export const createTransferableMessage = function(message) {
const transferable = {};
Object.keys(message).forEach((key) => {
const value = message[key];
if (isArrayBufferView(value)) {
transferable[key] = {
bytes: value.buffer,
byteOffset: value.byteOffset,
byteLength: value.byteLength
};
} else {
transferable[key] = value;
}
});
return transferable;
};
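// A hedged usage sketch: the expanded buffers can then be handed to
// postMessage's transfer list (the worker handle and the `encrypted`
// Uint8Array are assumptions for illustration):
//
//   const message = createTransferableMessage({ source: 1, encrypted });
//   worker.postMessage(message, [message.encrypted.bytes]);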
/**
* Returns a unique string identifier for a media initialization
* segment.
*
* @param {Object} initSegment
* the init segment object.
*
* @return {string} the generated init segment id
*/
export const initSegmentId = function(initSegment) {
const byterange = initSegment.byterange || {
length: Infinity,
offset: 0
};
return [
byterange.length, byterange.offset, initSegment.resolvedUri
].join(',');
};
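// Illustrative example (hypothetical init segment object):
//
//   initSegmentId({
//     byterange: { length: 720, offset: 0 },
//     resolvedUri: 'https://example.com/init.mp4'
//   });
//   // => '720,0,https://example.com/init.mp4'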
/**
* Returns a unique string identifier for a media segment key.
*
* @param {Object} key the encryption key
* @return {string} the unique id for the media segment key.
*/
export const segmentKeyId = function(key) {
return key.resolvedUri;
};
/**
* utils to help dump binary data to the console
*
* @param {Array|TypedArray} data
* data to dump to a string
*
* @return {string} the data as a hex string.
*/
export const hexDump = (data) => {
const bytes = Array.prototype.slice.call(data);
const step = 16;
let result = '';
let hex;
let ascii;
for (let j = 0; j < bytes.length / step; j++) {
hex = bytes.slice(j * step, j * step + step).map(formatHexString).join('');
ascii = bytes.slice(j * step, j * step + step).map(formatAsciiString).join('');
result += hex + ' ' + ascii + '\n';
}
return result;
};
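// Illustrative example of the format produced: hex pairs (with a space after
// every second byte) followed by the ASCII view, non-printable bytes as '.':
//
//   hexDump(new Uint8Array([0x47, 0x40, 0x00, 0x10, 0xff]));
//   // => '4740 0010 ff G@...\n'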
export const tagDump = ({ bytes }) => hexDump(bytes);
export const textRanges = (ranges) => {
let result = '';
let i;
for (i = 0; i < ranges.length; i++) {
result += textRange(ranges, i) + ' ';
}
return result;
};

View File

@ -0,0 +1,21 @@
export default {
GOAL_BUFFER_LENGTH: 30,
MAX_GOAL_BUFFER_LENGTH: 60,
BACK_BUFFER_LENGTH: 30,
GOAL_BUFFER_LENGTH_RATE: 1,
// 0.5 MB/s
INITIAL_BANDWIDTH: 4194304,
// A fudge factor to apply to advertised playlist bitrates to account for
// temporary fluctuations in client bandwidth
BANDWIDTH_VARIANCE: 1.2,
// How much of the buffer must be filled before we consider upswitching
BUFFER_LOW_WATER_LINE: 0,
MAX_BUFFER_LOW_WATER_LINE: 30,
// TODO: Remove this when experimentalBufferBasedABR is removed
EXPERIMENTAL_MAX_BUFFER_LOW_WATER_LINE: 16,
BUFFER_LOW_WATER_LINE_RATE: 1,
// If the buffer is greater than the high water line, we won't switch down
BUFFER_HIGH_WATER_LINE: 30
};
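// A hedged sketch of how these knobs are typically combined elsewhere in the
// codebase (assuming this default export is imported as `Config`; the formula
// is illustrative, not a quote of the call site): the forward-buffer goal
// grows with playback time and is capped at the maximum.
//
//   const goalBufferLength = (currentTime) => Math.min(
//     Config.GOAL_BUFFER_LENGTH + currentTime * Config.GOAL_BUFFER_LENGTH_RATE,
//     Config.MAX_GOAL_BUFFER_LENGTH
//   );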

View File

@ -0,0 +1,489 @@
import resolveUrl from './resolve-url';
import window from 'global/window';
import logger from './util/logger';
import videojs from 'video.js';
/**
* A utility class for setting properties and maintaining the state of the content steering manifest.
*
* Content Steering manifest format:
* VERSION: number (required) currently only version 1 is supported.
* TTL: number in seconds (optional) until the next content steering manifest reload.
* RELOAD-URI: string (optional) uri to fetch the next content steering manifest.
* SERVICE-LOCATION-PRIORITY or PATHWAY-PRIORITY: a non-empty array of unique string values.
* PATHWAY-CLONES: array (optional) (HLS only) pathway clone objects to copy from other playlists.
*/
class SteeringManifest {
constructor() {
this.priority_ = [];
this.pathwayClones_ = new Map();
}
set version(number) {
// Only version 1 is currently supported for both DASH and HLS.
if (number === 1) {
this.version_ = number;
}
}
set ttl(seconds) {
// TTL = time-to-live, default = 300 seconds.
this.ttl_ = seconds || 300;
}
set reloadUri(uri) {
if (uri) {
// reload URI can be relative to the previous reloadUri.
this.reloadUri_ = resolveUrl(this.reloadUri_, uri);
}
}
set priority(array) {
// priority must be a non-empty array of unique values.
if (array && array.length) {
this.priority_ = array;
}
}
set pathwayClones(array) {
// pathwayClones must be non-empty.
if (array && array.length) {
this.pathwayClones_ = new Map(array.map((clone) => [clone.ID, clone]));
}
}
get version() {
return this.version_;
}
get ttl() {
return this.ttl_;
}
get reloadUri() {
return this.reloadUri_;
}
get priority() {
return this.priority_;
}
get pathwayClones() {
return this.pathwayClones_;
}
}
/**
* This class represents a content steering manifest and associated state. See both HLS and DASH specifications.
* HLS: https://developer.apple.com/streaming/HLSContentSteeringSpecification.pdf and
* https://datatracker.ietf.org/doc/draft-pantos-hls-rfc8216bis/ section 4.4.6.6.
* DASH: https://dashif.org/docs/DASH-IF-CTS-00XX-Content-Steering-Community-Review.pdf
*
* @param {function} xhr for making a network request from the browser.
* @param {function} bandwidth for fetching the current bandwidth from the main segment loader.
*/
export default class ContentSteeringController extends videojs.EventTarget {
constructor(xhr, bandwidth) {
super();
this.currentPathway = null;
this.defaultPathway = null;
this.queryBeforeStart = false;
this.availablePathways_ = new Set();
this.steeringManifest = new SteeringManifest();
this.proxyServerUrl_ = null;
this.manifestType_ = null;
this.ttlTimeout_ = null;
this.request_ = null;
this.currentPathwayClones = new Map();
this.nextPathwayClones = new Map();
this.excludedSteeringManifestURLs = new Set();
this.logger_ = logger('Content Steering');
this.xhr_ = xhr;
this.getBandwidth_ = bandwidth;
}
/**
* Assigns the content steering tag properties to the steering controller
*
* @param {string} baseUrl the baseURL from the main manifest for resolving the steering manifest url
* @param {Object} steeringTag the content steering tag from the main manifest
*/
assignTagProperties(baseUrl, steeringTag) {
this.manifestType_ = steeringTag.serverUri ? 'HLS' : 'DASH';
// serverUri is HLS; serverURL is DASH
const steeringUri = steeringTag.serverUri || steeringTag.serverURL;
if (!steeringUri) {
this.logger_(`steering manifest URL is ${steeringUri}, cannot request steering manifest.`);
this.trigger('error');
return;
}
// Content steering manifests can be encoded as a data URI. We can decode, parse and return early if that's the case.
if (steeringUri.startsWith('data:')) {
this.decodeDataUriManifest_(steeringUri.substring(steeringUri.indexOf(',') + 1));
return;
}
// reloadUri is the resolution of the main manifest URL and steering URL.
this.steeringManifest.reloadUri = resolveUrl(baseUrl, steeringUri);
// pathwayId is HLS; defaultServiceLocation is DASH
this.defaultPathway = steeringTag.pathwayId || steeringTag.defaultServiceLocation;
// currently only DASH supports the following properties on <ContentSteering> tags.
this.queryBeforeStart = steeringTag.queryBeforeStart;
this.proxyServerUrl_ = steeringTag.proxyServerURL;
// trigger a steering event if we have a pathway from the content steering tag.
// this tells VHS which segment pathway to start with.
// If queryBeforeStart is true we need to wait for the steering manifest response.
if (this.defaultPathway && !this.queryBeforeStart) {
this.trigger('content-steering');
}
}
/**
* Requests the content steering manifest and parses the response. This should only be called after
* assignTagProperties was called with a content steering tag.
*
* @param {boolean} [initial] Whether this is the initial request.
*    If true, the request is made with the unmodified reload URI.
*    This scenario should only happen once, on initialization.
*/
requestSteeringManifest(initial) {
const reloadUri = this.steeringManifest.reloadUri;
if (!reloadUri) {
return;
}
// We currently don't support passing MPD query parameters directly to the content steering URL as this requires
// ExtUrlQueryInfo tag support. See the DASH content steering spec section 8.1.
// This request URI accounts for manifest URIs that have been excluded.
const uri = initial ? reloadUri : this.getRequestURI(reloadUri);
// If there are no valid manifest URIs, we should stop content steering.
if (!uri) {
this.logger_('No valid content steering manifest URIs. Stopping content steering.');
this.trigger('error');
this.dispose();
return;
}
const metadata = {
contentSteeringInfo: {
uri
}
};
this.trigger({ type: 'contentsteeringloadstart', metadata });
this.request_ = this.xhr_({
uri,
requestType: 'content-steering-manifest'
}, (error, errorInfo) => {
if (error) {
// If the client receives HTTP 410 Gone in response to a manifest request,
// it MUST NOT issue another request for that URI for the remainder of the
// playback session. It MAY continue to use the most-recently obtained set
// of Pathways.
if (errorInfo.status === 410) {
this.logger_(`manifest request 410 ${error}.`);
this.logger_(`There will be no more content steering requests to ${uri} this session.`);
this.excludedSteeringManifestURLs.add(uri);
return;
}
// If the client receives HTTP 429 Too Many Requests with a Retry-After
// header in response to a manifest request, it SHOULD wait until the time
// specified by the Retry-After header to reissue the request.
if (errorInfo.status === 429) {
const retrySeconds = errorInfo.responseHeaders['retry-after'];
this.logger_(`manifest request 429 ${error}.`);
this.logger_(`content steering will retry in ${retrySeconds} seconds.`);
this.startTTLTimeout_(parseInt(retrySeconds, 10));
return;
}
// If the Steering Manifest cannot be loaded and parsed correctly, the
// client SHOULD continue to use the previous values and attempt to reload
// it after waiting for the previously-specified TTL (or 5 minutes if
// none).
this.logger_(`manifest failed to load ${error}.`);
this.startTTLTimeout_();
return;
}
this.trigger({ type: 'contentsteeringloadcomplete', metadata });
let steeringManifestJson;
try {
steeringManifestJson = JSON.parse(this.request_.responseText);
} catch (parseError) {
const errorMetadata = {
errorType: videojs.Error.StreamingContentSteeringParserError,
error: parseError
};
this.trigger({ type: 'error', metadata: errorMetadata });
// bail out early: there is no parsed manifest to assign below
return;
}
this.assignSteeringProperties_(steeringManifestJson);
const parsedMetadata = {
contentSteeringInfo: metadata.contentSteeringInfo,
contentSteeringManifest: {
version: this.steeringManifest.version,
reloadUri: this.steeringManifest.reloadUri,
priority: this.steeringManifest.priority
}
};
this.trigger({ type: 'contentsteeringparsed', metadata: parsedMetadata });
this.startTTLTimeout_();
});
}
/**
* Set the proxy server URL and add the steering manifest url as a URI encoded parameter.
*
* @param {string} steeringUrl the steering manifest url
* @return {string} the steering manifest url routed through the proxy server with all parameters set
*/
setProxyServerUrl_(steeringUrl) {
const steeringUrlObject = new window.URL(steeringUrl);
const proxyServerUrlObject = new window.URL(this.proxyServerUrl_);
proxyServerUrlObject.searchParams.set('url', encodeURI(steeringUrlObject.toString()));
return this.setSteeringParams_(proxyServerUrlObject.toString());
}
/**
* Decodes and parses the data uri encoded steering manifest
*
* @param {string} dataUri the data uri to be decoded and parsed.
*/
decodeDataUriManifest_(dataUri) {
const steeringManifestJson = JSON.parse(window.atob(dataUri));
this.assignSteeringProperties_(steeringManifestJson);
}
/**
* Set the HLS or DASH content steering manifest request query parameters. For example:
* _HLS_pathway="<CURRENT-PATHWAY-ID>" and _HLS_throughput=<THROUGHPUT>
* _DASH_pathway and _DASH_throughput
*
* @param {string} url to add content steering server parameters to.
* @return {string} a new url with the added steering query parameters.
*/
setSteeringParams_(url) {
const urlObject = new window.URL(url);
const path = this.getPathway();
const networkThroughput = this.getBandwidth_();
if (path) {
const pathwayKey = `_${this.manifestType_}_pathway`;
urlObject.searchParams.set(pathwayKey, path);
}
if (networkThroughput) {
const throughputKey = `_${this.manifestType_}_throughput`;
urlObject.searchParams.set(throughputKey, networkThroughput);
}
return urlObject.toString();
}
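// For example (values illustrative), setSteeringParams_ on an HLS stream with
// pathway 'cdn-a' and a measured throughput of 250000 would yield:
//   https://steering.example.com/manifest.json?_HLS_pathway=cdn-a&_HLS_throughput=250000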
/**
* Assigns the current steering manifest properties to the SteeringManifest object
*
* @param {Object} steeringJson the raw JSON steering manifest
*/
assignSteeringProperties_(steeringJson) {
this.steeringManifest.version = steeringJson.VERSION;
if (!this.steeringManifest.version) {
this.logger_(`manifest version is ${steeringJson.VERSION}, which is not supported.`);
this.trigger('error');
return;
}
this.steeringManifest.ttl = steeringJson.TTL;
this.steeringManifest.reloadUri = steeringJson['RELOAD-URI'];
// HLS = PATHWAY-PRIORITY required. DASH = SERVICE-LOCATION-PRIORITY optional
this.steeringManifest.priority = steeringJson['PATHWAY-PRIORITY'] || steeringJson['SERVICE-LOCATION-PRIORITY'];
// Pathway clones to be created/updated in HLS.
// See section 7.2 https://datatracker.ietf.org/doc/draft-pantos-hls-rfc8216bis/
this.steeringManifest.pathwayClones = steeringJson['PATHWAY-CLONES'];
this.nextPathwayClones = this.steeringManifest.pathwayClones;
// 1. apply first pathway from the array.
// 2. if first pathway doesn't exist in manifest, try next pathway.
// a. if all pathways are exhausted, ignore the steering manifest priority.
// 3. if segments fail from an established pathway, try all variants/renditions, then exclude the failed pathway.
// a. exclude a pathway for a minimum of the last TTL duration. Meaning, from the next steering response,
// the excluded pathway will be ignored.
// See excludePathway usage in excludePlaylist().
// If there are no available pathways, we need to stop content steering.
if (!this.availablePathways_.size) {
this.logger_('There are no available pathways for content steering. Ending content steering.');
this.trigger('error');
this.dispose();
// bail out: there is no pathway left to choose from
return;
}
const chooseNextPathway = (pathwaysByPriority) => {
for (const path of pathwaysByPriority) {
if (this.availablePathways_.has(path)) {
return path;
}
}
// If no pathway matches, ignore the manifest and choose the first available.
return [...this.availablePathways_][0];
};
const nextPathway = chooseNextPathway(this.steeringManifest.priority);
if (this.currentPathway !== nextPathway) {
this.currentPathway = nextPathway;
this.trigger('content-steering');
}
}
/**
* Returns the pathway to use for steering decisions
*
* @return {string} returns the current pathway or the default
*/
getPathway() {
return this.currentPathway || this.defaultPathway;
}
/**
* Chooses the manifest request URI based on proxy URIs and server URLs.
* Also accounts for exclusion on certain manifest URIs.
*
* @param {string} reloadUri the base uri before parameters
*
* @return {string} the final URI for the request to the manifest server.
*/
getRequestURI(reloadUri) {
if (!reloadUri) {
return null;
}
const isExcluded = (uri) => this.excludedSteeringManifestURLs.has(uri);
if (this.proxyServerUrl_) {
const proxyURI = this.setProxyServerUrl_(reloadUri);
if (!isExcluded(proxyURI)) {
return proxyURI;
}
}
const steeringURI = this.setSteeringParams_(reloadUri);
if (!isExcluded(steeringURI)) {
return steeringURI;
}
// Return nothing if all valid manifest URIs are excluded.
return null;
}
/**
* Start the timeout for re-requesting the steering manifest at the TTL interval.
*
* @param {number} ttl time in seconds of the timeout. Defaults to the
* ttl interval in the steering manifest
*/
startTTLTimeout_(ttl = this.steeringManifest.ttl) {
// 300 seconds (5 minutes) is the default value.
const ttlMS = ttl * 1000;
this.ttlTimeout_ = window.setTimeout(() => {
this.requestSteeringManifest();
}, ttlMS);
}
/**
* Clear the TTL timeout if necessary.
*/
clearTTLTimeout_() {
window.clearTimeout(this.ttlTimeout_);
this.ttlTimeout_ = null;
}
/**
* aborts any current steering xhr and sets the current request object to null
*/
abort() {
if (this.request_) {
this.request_.abort();
}
this.request_ = null;
}
/**
* aborts steering requests, clears the TTL timeout, and resets all properties.
*/
dispose() {
this.off('content-steering');
this.off('error');
this.abort();
this.clearTTLTimeout_();
this.currentPathway = null;
this.defaultPathway = null;
this.queryBeforeStart = null;
this.proxyServerUrl_ = null;
this.manifestType_ = null;
this.ttlTimeout_ = null;
this.request_ = null;
this.excludedSteeringManifestURLs = new Set();
this.availablePathways_ = new Set();
this.steeringManifest = new SteeringManifest();
}
/**
* adds a pathway to the available pathways set
*
* @param {string} pathway the pathway string to add
*/
addAvailablePathway(pathway) {
if (pathway) {
this.availablePathways_.add(pathway);
}
}
/**
* Clears all pathways from the available pathways set
*/
clearAvailablePathways() {
this.availablePathways_.clear();
}
/**
* Removes a pathway from the available pathways set.
*/
excludePathway(pathway) {
return this.availablePathways_.delete(pathway);
}
/**
* Checks the refreshed DASH manifest content steering tag for changes.
*
* @param {string} baseURL the base URL for resolving the new tag's serverURL
* @param {Object} newTag the new tag to check for changes
* @return {boolean} whether the new tag has different values
*/
didDASHTagChange(baseURL, newTag) {
return !newTag && this.steeringManifest.reloadUri ||
newTag && (resolveUrl(baseURL, newTag.serverURL) !== this.steeringManifest.reloadUri ||
newTag.defaultServiceLocation !== this.defaultPathway ||
newTag.queryBeforeStart !== this.queryBeforeStart ||
newTag.proxyServerURL !== this.proxyServerUrl_);
}
getAvailablePathways() {
return this.availablePathways_;
}
}
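// A hedged usage sketch (the xhr wrapper and bandwidth getter are normally
// supplied by the playlist controller; the values here are illustrative):
//
//   const steering = new ContentSteeringController(videojs.xhr, () => 1e6);
//   steering.addAvailablePathway('cdn-a');
//   steering.addAvailablePathway('cdn-b');
//   steering.assignTagProperties('https://example.com/main.m3u8', {
//     serverUri: 'https://steering.example.com/manifest.json',
//     pathwayId: 'cdn-a'
//   });
//   steering.on('content-steering', () => {
//     // re-select playlists from steering.getPathway()
//   });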

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,41 @@
/* global self */
import { Decrypter } from 'aes-decrypter';
import { createTransferableMessage } from './bin-utils';
/**
* Our web worker interface so that things can talk to aes-decrypter
* that will be running in a web worker. The scope is passed to this by
* webworkify.
*/
self.onmessage = function(event) {
const data = event.data;
const encrypted = new Uint8Array(
data.encrypted.bytes,
data.encrypted.byteOffset,
data.encrypted.byteLength
);
const key = new Uint32Array(
data.key.bytes,
data.key.byteOffset,
data.key.byteLength / 4
);
const iv = new Uint32Array(
data.iv.bytes,
data.iv.byteOffset,
data.iv.byteLength / 4
);
/* eslint-disable no-new, handle-callback-err */
new Decrypter(
encrypted,
key,
iv,
function(err, bytes) {
self.postMessage(createTransferableMessage({
source: data.source,
decrypted: bytes
}), [bytes.buffer]);
}
);
/* eslint-enable */
};
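// A hedged sketch of the main-thread side (the worker handle and segment id
// are assumptions for illustration; VHS wires this up internally):
//
//   worker.postMessage(createTransferableMessage({
//     source: segmentId,
//     encrypted: encryptedBytes, // Uint8Array
//     key: keyBytes, // Uint8Array of 16 bytes
//     iv: ivBytes // Uint8Array of 16 bytes
//   }));
//   worker.onmessage = ({ data }) => {
//     const decrypted = new Uint8Array(
//       data.decrypted.bytes,
//       data.decrypted.byteOffset,
//       data.decrypted.byteLength
//     );
//   };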

View File

@ -0,0 +1,32 @@
import videojs from 'video.js';
// https://www.w3.org/TR/WebIDL-1/#quotaexceedederror
export const QUOTA_EXCEEDED_ERR = 22;
export const getStreamingNetworkErrorMetadata = ({ requestType, request, error, parseFailure }) => {
// any response outside the 2xx range is a bad status
const isBadStatus = request.status < 200 || request.status > 299;
// 4xx responses are treated as outright request failures
const isFailure = request.status >= 400 && request.status <= 499;
const errorMetadata = {
uri: request.uri,
requestType
};
const isBadStatusOrParseFailure = (isBadStatus && !isFailure) || parseFailure;
if (error && isFailure) {
// copy original error and add to the metadata.
errorMetadata.error = {...error};
errorMetadata.errorType = videojs.Error.NetworkRequestFailed;
} else if (request.aborted) {
errorMetadata.errorType = videojs.Error.NetworkRequestAborted;
} else if (request.timedout) {
errorMetadata.errorType = videojs.Error.NetworkRequestTimeout;
} else if (isBadStatusOrParseFailure) {
const errorType = parseFailure ? videojs.Error.NetworkBodyParserFailed : videojs.Error.NetworkBadStatus;
errorMetadata.errorType = errorType;
errorMetadata.status = request.status;
errorMetadata.headers = request.headers;
}
return errorMetadata;
};
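// A hedged usage sketch (the request object mirrors what this project's xhr
// callbacks receive; the values are illustrative):
//
//   getStreamingNetworkErrorMetadata({
//     requestType: 'segment',
//     request: { uri: 'https://example.com/seg1.ts', status: 500, headers: {} }
//   });
//   // => { uri: 'https://example.com/seg1.ts', requestType: 'segment',
//   //      errorType: videojs.Error.NetworkBadStatus, status: 500, headers: {} }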

View File

@ -0,0 +1,343 @@
import videojs from 'video.js';
import window from 'global/window';
import { Parser as M3u8Parser } from 'm3u8-parser';
import { resolveUrl } from './resolve-url';
import { getLastParts, isAudioOnly } from './playlist.js';
const { log } = videojs;
export const createPlaylistID = (index, uri) => {
return `${index}-${uri}`;
};
// default function for creating a group id
export const groupID = (type, group, label) => {
return `placeholder-uri-${type}-${group}-${label}`;
};
/**
* Parses a given m3u8 playlist
*
* @param {Function} [onwarn]
* a function to call when the parser triggers a warning event.
* @param {Function} [oninfo]
* a function to call when the parser triggers an info event.
* @param {string} manifestString
* The downloaded manifest string
* @param {Object[]} [customTagParsers]
* An array of custom tag parsers for the m3u8-parser instance
* @param {Object[]} [customTagMappers]
* An array of custom tag mappers for the m3u8-parser instance
* @param {boolean} [llhls]
* Whether to keep ll-hls features in the manifest after parsing.
* @return {Object}
* The manifest object
*/
export const parseManifest = ({
onwarn,
oninfo,
manifestString,
customTagParsers = [],
customTagMappers = [],
llhls
}) => {
const parser = new M3u8Parser();
if (onwarn) {
parser.on('warn', onwarn);
}
if (oninfo) {
parser.on('info', oninfo);
}
customTagParsers.forEach(customParser => parser.addParser(customParser));
customTagMappers.forEach(mapper => parser.addTagMapper(mapper));
parser.push(manifestString);
parser.end();
const manifest = parser.manifest;
// remove llhls features from the parsed manifest
// if we don't want llhls support.
if (!llhls) {
[
'preloadSegment',
'skip',
'serverControl',
'renditionReports',
'partInf',
'partTargetDuration'
].forEach(function(k) {
if (manifest.hasOwnProperty(k)) {
delete manifest[k];
}
});
if (manifest.segments) {
manifest.segments.forEach(function(segment) {
['parts', 'preloadHints'].forEach(function(k) {
if (segment.hasOwnProperty(k)) {
delete segment[k];
}
});
});
}
}
if (!manifest.targetDuration) {
let targetDuration = 10;
if (manifest.segments && manifest.segments.length) {
targetDuration = manifest
.segments.reduce((acc, s) => Math.max(acc, s.duration), 0);
}
if (onwarn) {
onwarn({ message: `manifest has no targetDuration, defaulting to ${targetDuration}` });
}
manifest.targetDuration = targetDuration;
}
const parts = getLastParts(manifest);
if (parts.length && !manifest.partTargetDuration) {
const partTargetDuration = parts.reduce((acc, p) => Math.max(acc, p.duration), 0);
if (onwarn) {
onwarn({ message: `manifest has no partTargetDuration, defaulting to ${partTargetDuration}` });
log.error('LL-HLS manifest has parts but lacks required #EXT-X-PART-INF:PART-TARGET value. See https://datatracker.ietf.org/doc/html/draft-pantos-hls-rfc8216bis-09#section-4.4.3.7. Playback is not guaranteed.');
}
manifest.partTargetDuration = partTargetDuration;
}
return manifest;
};
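// A usage sketch (illustrative; the manifest text is a hypothetical minimal
// VOD playlist, and onwarn simply forwards parser warnings to the log):
const exampleManifest = parseManifest({
manifestString: [
'#EXTM3U',
'#EXT-X-TARGETDURATION:10',
'#EXTINF:10,',
'segment0.ts',
'#EXT-X-ENDLIST'
].join('\n'),
onwarn: ({ message }) => log.warn('m3u8 parse warning:', message)
});
// exampleManifest.targetDuration === 10 and exampleManifest.segments.length === 1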
/**
* Loops through all supported media groups in main and calls the provided
* callback for each group
*
* @param {Object} main
* The parsed main manifest object
* @param {Function} callback
* Callback to call for each media group
*/
export const forEachMediaGroup = (main, callback) => {
if (!main.mediaGroups) {
return;
}
['AUDIO', 'SUBTITLES'].forEach((mediaType) => {
if (!main.mediaGroups[mediaType]) {
return;
}
for (const groupKey in main.mediaGroups[mediaType]) {
for (const labelKey in main.mediaGroups[mediaType][groupKey]) {
const mediaProperties = main.mediaGroups[mediaType][groupKey][labelKey];
callback(mediaProperties, mediaType, groupKey, labelKey);
}
}
});
};
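// A small sketch of the iteration order (illustrative helper, not part of the
// module): collect a label for every AUDIO/SUBTITLES rendition in a parsed
// main manifest.
const collectMediaGroupLabels = (parsedMain) => {
const labels = [];
forEachMediaGroup(parsedMain, (properties, mediaType, groupKey, labelKey) => {
labels.push(`${mediaType}/${groupKey}/${labelKey}`);
});
return labels;
};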
/**
* Adds properties and attributes to the playlist to keep consistent functionality for
* playlists throughout VHS.
*
* @param {Object} config
* Arguments object
* @param {Object} config.playlist
* The media playlist
* @param {string} [config.uri]
* The uri to the media playlist (if media playlist is not from within a main
* playlist)
 * @param {string} config.id
 *        ID to use for the playlist
*/
export const setupMediaPlaylist = ({ playlist, uri, id }) => {
playlist.id = id;
playlist.playlistErrors_ = 0;
if (uri) {
// For media playlists, m3u8-parser does not have access to a URI, as HLS media
// playlists do not contain their own source URI, but one is needed for consistency in
// VHS.
playlist.uri = uri;
}
// For HLS main playlists, even though certain attributes MUST be defined, the
// stream may still be played without them.
// For HLS media playlists, m3u8-parser does not attach an attributes object to the
// manifest.
//
// To avoid undefined reference errors through the project, and make the code easier
// to write/read, add an empty attributes object for these cases.
playlist.attributes = playlist.attributes || {};
};
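// A usage sketch (illustrative values): after this call the playlist carries
// the id, uri, error counter, and attributes object that VHS relies on.
const examplePlaylist = {};
setupMediaPlaylist({
playlist: examplePlaylist,
uri: 'media.m3u8',
id: '0-media.m3u8'
});
// examplePlaylist => { id: '0-media.m3u8', playlistErrors_: 0, uri: 'media.m3u8', attributes: {} }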
/**
* Adds ID, resolvedUri, and attributes properties to each playlist of the main, where
* necessary. In addition, creates playlist IDs for each playlist and adds playlist ID to
* playlist references to the playlists array.
*
* @param {Object} main
* The main playlist
*/
export const setupMediaPlaylists = (main) => {
let i = main.playlists.length;
while (i--) {
const playlist = main.playlists[i];
setupMediaPlaylist({
playlist,
id: createPlaylistID(i, playlist.uri)
});
playlist.resolvedUri = resolveUrl(main.uri, playlist.uri);
main.playlists[playlist.id] = playlist;
// URI reference added for backwards compatibility
main.playlists[playlist.uri] = playlist;
// Although the spec states an #EXT-X-STREAM-INF tag MUST have a BANDWIDTH attribute,
// the stream can be played without it. Although an attributes property may have been
// added to the playlist to prevent undefined references, issue a warning to fix the
// manifest.
if (!playlist.attributes.BANDWIDTH) {
log.warn('Invalid playlist STREAM-INF detected. Missing BANDWIDTH attribute.');
}
}
};
/**
* Adds resolvedUri properties to each media group.
*
* @param {Object} main
* The main playlist
*/
export const resolveMediaGroupUris = (main) => {
forEachMediaGroup(main, (properties) => {
if (properties.uri) {
properties.resolvedUri = resolveUrl(main.uri, properties.uri);
}
});
};
/**
* Creates a main playlist wrapper to insert a sole media playlist into.
*
* @param {Object} media
* Media playlist
* @param {string} uri
* The media URI
*
* @return {Object}
* main playlist
*/
export const mainForMedia = (media, uri) => {
const id = createPlaylistID(0, uri);
const main = {
mediaGroups: {
'AUDIO': {},
'VIDEO': {},
'CLOSED-CAPTIONS': {},
'SUBTITLES': {}
},
uri: window.location.href,
resolvedUri: window.location.href,
playlists: [{
uri,
id,
resolvedUri: uri,
// m3u8-parser does not attach an attributes property to media playlists so make
// sure that the property is attached to avoid undefined reference errors
attributes: {}
}]
};
// set up ID reference
main.playlists[id] = main.playlists[0];
// URI reference added for backwards compatibility
main.playlists[uri] = main.playlists[0];
return main;
};
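// A usage sketch (the URI is hypothetical): wrapping a directly-loaded media
// playlist so the rest of VHS can treat it as a main playlist.
const exampleMain = mainForMedia(
{ segments: [], targetDuration: 10 },
'https://example.com/media.m3u8'
);
// exampleMain.playlists[0], exampleMain.playlists['0-https://example.com/media.m3u8']
// and exampleMain.playlists['https://example.com/media.m3u8'] all point at the
// same placeholder playlist entry.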
/**
* Does an in-place update of the main manifest to add updated playlist URI references
* as well as other properties needed by VHS that aren't included by the parser.
*
* @param {Object} main
* main manifest object
* @param {string} uri
* The source URI
* @param {function} createGroupID
* A function to determine how to create the groupID for mediaGroups
*/
export const addPropertiesToMain = (main, uri, createGroupID = groupID) => {
main.uri = uri;
for (let i = 0; i < main.playlists.length; i++) {
if (!main.playlists[i].uri) {
// Set up phony URIs for the playlists since playlists are referenced by their URIs
// throughout VHS, but some formats (e.g., DASH) don't have external URIs
// TODO: consider adding dummy URIs in mpd-parser
const phonyUri = `placeholder-uri-${i}`;
main.playlists[i].uri = phonyUri;
}
}
const audioOnlyMain = isAudioOnly(main);
forEachMediaGroup(main, (properties, mediaType, groupKey, labelKey) => {
// add a playlist array under properties
if (!properties.playlists || !properties.playlists.length) {
// If the manifest is audio only and this media group does not have a uri, check
// if the media group is located in the main list of playlists. If it is, don't add
// placeholder properties as it shouldn't be considered an alternate audio track.
if (audioOnlyMain && mediaType === 'AUDIO' && !properties.uri) {
for (let i = 0; i < main.playlists.length; i++) {
const p = main.playlists[i];
if (p.attributes && p.attributes.AUDIO && p.attributes.AUDIO === groupKey) {
return;
}
}
}
properties.playlists = [Object.assign({}, properties)];
}
properties.playlists.forEach(function(p, i) {
const groupId = createGroupID(mediaType, groupKey, labelKey, p);
const id = createPlaylistID(i, groupId);
if (p.uri) {
p.resolvedUri = p.resolvedUri || resolveUrl(main.uri, p.uri);
} else {
// DEPRECATED: this is kept to prevent a breaking change.
// Previously we only ever had a single media group playlist, so we mark
// the first playlist uri without prepending the index, as we used to.
// Ideally we would handle all of the playlists the same way.
p.uri = i === 0 ? groupId : id;
// don't resolve a placeholder uri to an absolute url, just use
// the placeholder again
p.resolvedUri = p.uri;
}
p.id = p.id || id;
// add an empty attributes object, all playlists are
// expected to have this.
p.attributes = p.attributes || {};
// setup ID and URI references (URI for backwards compatibility)
main.playlists[p.id] = p;
main.playlists[p.uri] = p;
});
});
setupMediaPlaylists(main);
resolveMediaGroupUris(main);
};
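// A usage sketch (illustrative URIs and attributes): decorating a freshly
// parsed main manifest in place.
const exampleParsedMain = {
playlists: [{ uri: 'low.m3u8', attributes: { BANDWIDTH: 1000000 } }],
mediaGroups: {}
};
addPropertiesToMain(exampleParsedMain, 'https://example.com/main.m3u8');
// exampleParsedMain.playlists['0-low.m3u8'].resolvedUri === 'https://example.com/low.m3u8'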

View File

@ -0,0 +1,961 @@
import videojs from 'video.js';
import PlaylistLoader from './playlist-loader';
import DashPlaylistLoader from './dash-playlist-loader';
import noop from './util/noop';
import {isAudioOnly, playlistMatch} from './playlist.js';
import logger from './util/logger';
import {merge} from './util/vjs-compat';
/**
* Convert the properties of an HLS track into an audioTrackKind.
*
* @private
*/
const audioTrackKind_ = (properties) => {
let kind = properties.default ? 'main' : 'alternative';
if (properties.characteristics &&
properties.characteristics.indexOf('public.accessibility.describes-video') >= 0) {
kind = 'main-desc';
}
return kind;
};
/**
* Pause provided segment loader and playlist loader if active
*
* @param {SegmentLoader} segmentLoader
* SegmentLoader to pause
* @param {Object} mediaType
* Active media type
* @function stopLoaders
*/
export const stopLoaders = (segmentLoader, mediaType) => {
segmentLoader.abort();
segmentLoader.pause();
if (mediaType && mediaType.activePlaylistLoader) {
mediaType.activePlaylistLoader.pause();
mediaType.activePlaylistLoader = null;
}
};
/**
* Start loading provided segment loader and playlist loader
*
* @param {PlaylistLoader} playlistLoader
* PlaylistLoader to start loading
* @param {Object} mediaType
* Active media type
* @function startLoaders
*/
export const startLoaders = (playlistLoader, mediaType) => {
// Segment loader will be started after `loadedmetadata` or `loadedplaylist` from the
// playlist loader
mediaType.activePlaylistLoader = playlistLoader;
playlistLoader.load();
};
/**
* Returns a function to be called when the media group changes. It performs a
* non-destructive (preserve the buffer) resync of the SegmentLoader. This is because a
* change of group is merely a rendition switch of the same content at another encoding,
* rather than a change of content, such as switching audio from English to Spanish.
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @return {Function}
* Handler for a non-destructive resync of SegmentLoader when the active media
* group changes.
* @function onGroupChanged
*/
export const onGroupChanged = (type, settings) => () => {
const {
segmentLoaders: {
[type]: segmentLoader,
main: mainSegmentLoader
},
mediaTypes: { [type]: mediaType }
} = settings;
const activeTrack = mediaType.activeTrack();
const activeGroup = mediaType.getActiveGroup();
const previousActiveLoader = mediaType.activePlaylistLoader;
const lastGroup = mediaType.lastGroup_;
// the group did not change, do nothing
if (activeGroup && lastGroup && activeGroup.id === lastGroup.id) {
return;
}
mediaType.lastGroup_ = activeGroup;
mediaType.lastTrack_ = activeTrack;
stopLoaders(segmentLoader, mediaType);
if (!activeGroup || activeGroup.isMainPlaylist) {
// there is no group active or active group is a main playlist and won't change
return;
}
if (!activeGroup.playlistLoader) {
if (previousActiveLoader) {
// The previous group had a playlist loader but the new active group does not
// this means we are switching from demuxed to muxed audio. In this case we want to
// do a destructive reset of the main segment loader and not restart the audio
// loaders.
mainSegmentLoader.resetEverything();
}
return;
}
// Non-destructive resync
segmentLoader.resyncLoader();
startLoaders(activeGroup.playlistLoader, mediaType);
};
export const onGroupChanging = (type, settings) => () => {
const {
segmentLoaders: {
[type]: segmentLoader
},
mediaTypes: { [type]: mediaType }
} = settings;
mediaType.lastGroup_ = null;
segmentLoader.abort();
segmentLoader.pause();
};
/**
* Returns a function to be called when the media track changes. It performs a
* destructive reset of the SegmentLoader to ensure we start loading as close to
* currentTime as possible.
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @return {Function}
* Handler for a destructive reset of SegmentLoader when the active media
* track changes.
* @function onTrackChanged
*/
export const onTrackChanged = (type, settings) => () => {
const {
mainPlaylistLoader,
segmentLoaders: {
[type]: segmentLoader,
main: mainSegmentLoader
},
mediaTypes: { [type]: mediaType }
} = settings;
const activeTrack = mediaType.activeTrack();
const activeGroup = mediaType.getActiveGroup();
const previousActiveLoader = mediaType.activePlaylistLoader;
const lastTrack = mediaType.lastTrack_;
// track did not change, do nothing
if (lastTrack && activeTrack && lastTrack.id === activeTrack.id) {
return;
}
mediaType.lastGroup_ = activeGroup;
mediaType.lastTrack_ = activeTrack;
stopLoaders(segmentLoader, mediaType);
if (!activeGroup) {
// there is no group active so we do not want to restart loaders
return;
}
if (activeGroup.isMainPlaylist) {
// no active or previous track, or the track did not change, do nothing
if (!activeTrack || !lastTrack || activeTrack.id === lastTrack.id) {
return;
}
const pc = settings.vhs.playlistController_;
const newPlaylist = pc.selectPlaylist();
// media will not change, do nothing
if (pc.media() === newPlaylist) {
return;
}
mediaType.logger_(`track change. Switching main audio from ${lastTrack.id} to ${activeTrack.id}`);
mainPlaylistLoader.pause();
mainSegmentLoader.resetEverything();
pc.fastQualityChange_(newPlaylist);
return;
}
if (type === 'AUDIO') {
if (!activeGroup.playlistLoader) {
// when switching from demuxed audio/video to muxed audio/video (noted by no
// playlist loader for the audio group), we want to do a destructive reset of the
// main segment loader and not restart the audio loaders
mainSegmentLoader.setAudio(true);
// don't have to worry about disabling the audio of the audio segment loader since
// it should be stopped
mainSegmentLoader.resetEverything();
return;
}
// although the segment loader is an audio segment loader, call the setAudio
// function to ensure it is prepared to re-append the init segment (or handle other
// config changes)
segmentLoader.setAudio(true);
mainSegmentLoader.setAudio(false);
}
if (previousActiveLoader === activeGroup.playlistLoader) {
// Nothing has actually changed. This can happen because track change events can fire
// multiple times for a "single" change. One for enabling the new active track, and
// one for disabling the track that was active
startLoaders(activeGroup.playlistLoader, mediaType);
return;
}
if (segmentLoader.track) {
// For WebVTT, set the new text track in the segmentloader
segmentLoader.track(activeTrack);
}
// destructive reset
segmentLoader.resetEverything();
startLoaders(activeGroup.playlistLoader, mediaType);
};
export const onError = {
/**
* Returns a function to be called when a SegmentLoader or PlaylistLoader encounters
* an error.
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @return {Function}
* Error handler. Logs warning (or error if the playlist is excluded) to
* console and switches back to default audio track.
* @function onError.AUDIO
*/
AUDIO: (type, settings) => () => {
const {
mediaTypes: { [type]: mediaType },
excludePlaylist
} = settings;
// switch back to default audio track
const activeTrack = mediaType.activeTrack();
const activeGroup = mediaType.activeGroup();
const id = (activeGroup.filter(group => group.default)[0] || activeGroup[0]).id;
const defaultTrack = mediaType.tracks[id];
if (activeTrack === defaultTrack) {
// Default track encountered an error. All we can do now is exclude the current
// rendition and hope another will switch audio groups
excludePlaylist({
error: { message: 'Problem encountered loading the default audio track.' }
});
return;
}
videojs.log.warn('Problem encountered loading the alternate audio track. ' +
'Switching back to default.');
for (const trackId in mediaType.tracks) {
mediaType.tracks[trackId].enabled = mediaType.tracks[trackId] === defaultTrack;
}
mediaType.onTrackChanged();
},
/**
* Returns a function to be called when a SegmentLoader or PlaylistLoader encounters
* an error.
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @return {Function}
* Error handler. Logs warning to console and disables the active subtitle track
* @function onError.SUBTITLES
*/
SUBTITLES: (type, settings) => () => {
const {
mediaTypes: { [type]: mediaType }
} = settings;
videojs.log.warn('Problem encountered loading the subtitle track. ' +
'Disabling subtitle track.');
const track = mediaType.activeTrack();
if (track) {
track.mode = 'disabled';
}
mediaType.onTrackChanged();
}
};
export const setupListeners = {
/**
* Setup event listeners for audio playlist loader
*
* @param {string} type
* MediaGroup type
* @param {PlaylistLoader|null} playlistLoader
* PlaylistLoader to register listeners on
* @param {Object} settings
* Object containing required information for media groups
* @function setupListeners.AUDIO
*/
AUDIO: (type, playlistLoader, settings) => {
if (!playlistLoader) {
// no playlist loader means audio will be muxed with the video
return;
}
const {
tech,
requestOptions,
segmentLoaders: { [type]: segmentLoader }
} = settings;
playlistLoader.on('loadedmetadata', () => {
const media = playlistLoader.media();
segmentLoader.playlist(media, requestOptions);
// if the video is already playing, or if this isn't a live video and preload
// permits, start downloading segments
if (!tech.paused() || (media.endList && tech.preload() !== 'none')) {
segmentLoader.load();
}
});
playlistLoader.on('loadedplaylist', () => {
segmentLoader.playlist(playlistLoader.media(), requestOptions);
// If the player isn't paused, ensure that the segment loader is running
if (!tech.paused()) {
segmentLoader.load();
}
});
playlistLoader.on('error', onError[type](type, settings));
},
/**
* Setup event listeners for subtitle playlist loader
*
* @param {string} type
* MediaGroup type
* @param {PlaylistLoader|null} playlistLoader
* PlaylistLoader to register listeners on
* @param {Object} settings
* Object containing required information for media groups
* @function setupListeners.SUBTITLES
*/
SUBTITLES: (type, playlistLoader, settings) => {
const {
tech,
requestOptions,
segmentLoaders: { [type]: segmentLoader },
mediaTypes: { [type]: mediaType }
} = settings;
playlistLoader.on('loadedmetadata', () => {
const media = playlistLoader.media();
segmentLoader.playlist(media, requestOptions);
segmentLoader.track(mediaType.activeTrack());
// if the video is already playing, or if this isn't a live video and preload
// permits, start downloading segments
if (!tech.paused() || (media.endList && tech.preload() !== 'none')) {
segmentLoader.load();
}
});
playlistLoader.on('loadedplaylist', () => {
segmentLoader.playlist(playlistLoader.media(), requestOptions);
// If the player isn't paused, ensure that the segment loader is running
if (!tech.paused()) {
segmentLoader.load();
}
});
playlistLoader.on('error', onError[type](type, settings));
}
};
export const initialize = {
/**
* Setup PlaylistLoaders and AudioTracks for the audio groups
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @function initialize.AUDIO
*/
'AUDIO': (type, settings) => {
const {
vhs,
sourceType,
segmentLoaders: { [type]: segmentLoader },
requestOptions,
main: {mediaGroups},
mediaTypes: {
[type]: {
groups,
tracks,
logger_
}
},
mainPlaylistLoader
} = settings;
const audioOnlyMain = isAudioOnly(mainPlaylistLoader.main);
// force a default if we have none
if (!mediaGroups[type] ||
Object.keys(mediaGroups[type]).length === 0) {
mediaGroups[type] = { main: { default: { default: true } } };
if (audioOnlyMain) {
mediaGroups[type].main.default.playlists = mainPlaylistLoader.main.playlists;
}
}
for (const groupId in mediaGroups[type]) {
if (!groups[groupId]) {
groups[groupId] = [];
}
for (const variantLabel in mediaGroups[type][groupId]) {
let properties = mediaGroups[type][groupId][variantLabel];
let playlistLoader;
if (audioOnlyMain) {
logger_(`AUDIO group '${groupId}' label '${variantLabel}' is a main playlist`);
properties.isMainPlaylist = true;
playlistLoader = null;
// if vhs-json was provided as the source, and the media playlist was resolved,
// use the resolved media playlist object
} else if (sourceType === 'vhs-json' && properties.playlists) {
playlistLoader = new PlaylistLoader(
properties.playlists[0],
vhs,
requestOptions
);
} else if (properties.resolvedUri) {
playlistLoader = new PlaylistLoader(
properties.resolvedUri,
vhs,
requestOptions
);
// TODO: dash isn't the only source type with properties.playlists;
// should we even have properties.playlists in this check?
} else if (properties.playlists && sourceType === 'dash') {
playlistLoader = new DashPlaylistLoader(
properties.playlists[0],
vhs,
requestOptions,
mainPlaylistLoader
);
} else {
// no resolvedUri means the audio is muxed with the video when using this
// audio track
playlistLoader = null;
}
properties = merge(
{ id: variantLabel, playlistLoader },
properties
);
setupListeners[type](type, properties.playlistLoader, settings);
groups[groupId].push(properties);
if (typeof tracks[variantLabel] === 'undefined') {
const track = new videojs.AudioTrack({
id: variantLabel,
kind: audioTrackKind_(properties),
enabled: false,
language: properties.language,
default: properties.default,
label: variantLabel
});
tracks[variantLabel] = track;
}
}
}
// setup single error event handler for the segment loader
segmentLoader.on('error', onError[type](type, settings));
},
/**
* Setup PlaylistLoaders and TextTracks for the subtitle groups
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @function initialize.SUBTITLES
*/
'SUBTITLES': (type, settings) => {
const {
tech,
vhs,
sourceType,
segmentLoaders: { [type]: segmentLoader },
requestOptions,
main: { mediaGroups },
mediaTypes: {
[type]: {
groups,
tracks
}
},
mainPlaylistLoader
} = settings;
for (const groupId in mediaGroups[type]) {
if (!groups[groupId]) {
groups[groupId] = [];
}
for (const variantLabel in mediaGroups[type][groupId]) {
if (!vhs.options_.useForcedSubtitles && mediaGroups[type][groupId][variantLabel].forced) {
// Subtitle playlists with the forced attribute are not selectable in Safari.
// According to Apple's HLS Authoring Specification:
// If content has forced subtitles and regular subtitles in a given language,
// the regular subtitles track in that language MUST contain both the forced
// subtitles and the regular subtitles for that language.
// Because of this requirement and that Safari does not add forced subtitles,
// forced subtitles are skipped here to maintain consistent experience across
// all platforms
continue;
}
let properties = mediaGroups[type][groupId][variantLabel];
let playlistLoader;
if (sourceType === 'hls') {
playlistLoader =
new PlaylistLoader(properties.resolvedUri, vhs, requestOptions);
} else if (sourceType === 'dash') {
const playlists = properties.playlists.filter((p) => p.excludeUntil !== Infinity);
if (!playlists.length) {
return;
}
playlistLoader = new DashPlaylistLoader(
properties.playlists[0],
vhs,
requestOptions,
mainPlaylistLoader
);
} else if (sourceType === 'vhs-json') {
playlistLoader = new PlaylistLoader(
// if the vhs-json object included the media playlist, use the media playlist
// as provided, otherwise use the resolved URI to load the playlist
properties.playlists ? properties.playlists[0] : properties.resolvedUri,
vhs,
requestOptions
);
}
properties = merge({
id: variantLabel,
playlistLoader
}, properties);
setupListeners[type](type, properties.playlistLoader, settings);
groups[groupId].push(properties);
if (typeof tracks[variantLabel] === 'undefined') {
const track = tech.addRemoteTextTrack({
id: variantLabel,
kind: 'subtitles',
default: properties.default && properties.autoselect,
language: properties.language,
label: variantLabel
}, false).track;
tracks[variantLabel] = track;
}
}
}
// setup single error event handler for the segment loader
segmentLoader.on('error', onError[type](type, settings));
},
/**
* Setup TextTracks for the closed-caption groups
*
 * @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @function initialize['CLOSED-CAPTIONS']
*/
'CLOSED-CAPTIONS': (type, settings) => {
const {
tech,
main: { mediaGroups },
mediaTypes: {
[type]: {
groups,
tracks
}
}
} = settings;
for (const groupId in mediaGroups[type]) {
if (!groups[groupId]) {
groups[groupId] = [];
}
for (const variantLabel in mediaGroups[type][groupId]) {
const properties = mediaGroups[type][groupId][variantLabel];
// Look for either 608 (CCn) or 708 (SERVICEn) caption services
if (!/^(?:CC|SERVICE)/.test(properties.instreamId)) {
continue;
}
const captionServices = tech.options_.vhs && tech.options_.vhs.captionServices || {};
let newProps = {
label: variantLabel,
language: properties.language,
instreamId: properties.instreamId,
default: properties.default && properties.autoselect
};
if (captionServices[newProps.instreamId]) {
newProps = merge(newProps, captionServices[newProps.instreamId]);
}
if (newProps.default === undefined) {
delete newProps.default;
}
// No PlaylistLoader is required for Closed-Captions because the captions are
// embedded within the video stream
groups[groupId].push(merge({ id: variantLabel }, properties));
if (typeof tracks[variantLabel] === 'undefined') {
const track = tech.addRemoteTextTrack({
id: newProps.instreamId,
kind: 'captions',
default: newProps.default,
language: newProps.language,
label: newProps.label
}, false).track;
tracks[variantLabel] = track;
}
}
}
}
};
const groupMatch = (list, media) => {
for (let i = 0; i < list.length; i++) {
if (playlistMatch(media, list[i])) {
return true;
}
if (list[i].playlists && groupMatch(list[i].playlists, media)) {
return true;
}
}
return false;
};
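// A small sketch (illustrative objects): playlistMatch compares playlists by
// identity, id, resolvedUri, or uri, so plain objects are enough to show the
// recursive walk through nested `playlists` arrays (as DASH groups have).
const exampleAudioGroup = [
{ id: 'en' },
{ id: 'es', playlists: [{ id: 'es-0' }] }
];
groupMatch(exampleAudioGroup, { id: 'es-0' }); // => true, found in a nested playlists array
groupMatch(exampleAudioGroup, { id: 'fr' }); // => false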
/**
* Returns a function used to get the active group of the provided type
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @return {Function}
* Function that returns the active media group for the provided type. Takes an
* optional parameter {TextTrack} track. If no track is provided, a list of all
* variants in the group, otherwise the variant corresponding to the provided
* track is returned.
* @function activeGroup
*/
export const activeGroup = (type, settings) => (track) => {
const {
mainPlaylistLoader,
mediaTypes: { [type]: { groups } }
} = settings;
const media = mainPlaylistLoader.media();
if (!media) {
return null;
}
let variants = null;
// set variants to the main media active group
if (media.attributes[type]) {
variants = groups[media.attributes[type]];
}
const groupKeys = Object.keys(groups);
if (!variants) {
// find the mainPlaylistLoader media
// that is in a media group if we are dealing
// with audio only
if (type === 'AUDIO' && groupKeys.length > 1 && isAudioOnly(settings.main)) {
for (let i = 0; i < groupKeys.length; i++) {
const groupPropertyList = groups[groupKeys[i]];
if (groupMatch(groupPropertyList, media)) {
variants = groupPropertyList;
break;
}
}
// use the main group if it exists
} else if (groups.main) {
variants = groups.main;
// only one group, use that one
} else if (groupKeys.length === 1) {
variants = groups[groupKeys[0]];
}
}
if (typeof track === 'undefined') {
return variants;
}
if (track === null || !variants) {
// An active track was specified so a corresponding group is expected. track === null
// means no track is currently active so there is no corresponding group
return null;
}
return variants.filter((props) => props.id === track.id)[0] || null;
};
export const activeTrack = {
/**
* Returns a function used to get the active track of type provided
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @return {Function}
* Function that returns the active media track for the provided type. Returns
* null if no track is active
* @function activeTrack.AUDIO
*/
AUDIO: (type, settings) => () => {
const { mediaTypes: { [type]: { tracks } } } = settings;
for (const id in tracks) {
if (tracks[id].enabled) {
return tracks[id];
}
}
return null;
},
/**
* Returns a function used to get the active track of type provided
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @return {Function}
* Function that returns the active media track for the provided type. Returns
* null if no track is active
* @function activeTrack.SUBTITLES
*/
SUBTITLES: (type, settings) => () => {
const { mediaTypes: { [type]: { tracks } } } = settings;
for (const id in tracks) {
if (tracks[id].mode === 'showing' || tracks[id].mode === 'hidden') {
return tracks[id];
}
}
return null;
}
};
export const getActiveGroup = (type, {mediaTypes}) => () => {
const activeTrack_ = mediaTypes[type].activeTrack();
if (!activeTrack_) {
return null;
}
return mediaTypes[type].activeGroup(activeTrack_);
};
/**
* Setup PlaylistLoaders and Tracks for media groups (Audio, Subtitles,
* Closed-Captions) specified in the main manifest.
*
* @param {Object} settings
* Object containing required information for setting up the media groups
* @param {Tech} settings.tech
* The tech of the player
* @param {Object} settings.requestOptions
* XHR request options used by the segment loaders
* @param {PlaylistLoader} settings.mainPlaylistLoader
* PlaylistLoader for the main source
* @param {VhsHandler} settings.vhs
* VHS SourceHandler
* @param {Object} settings.main
* The parsed main manifest
* @param {Object} settings.mediaTypes
* Object to store the loaders, tracks, and utility methods for each media type
* @param {Function} settings.excludePlaylist
* Excludes the current rendition and forces a rendition switch.
* @function setupMediaGroups
*/
export const setupMediaGroups = (settings) => {
['AUDIO', 'SUBTITLES', 'CLOSED-CAPTIONS'].forEach((type) => {
initialize[type](type, settings);
});
const {
mediaTypes,
mainPlaylistLoader,
tech,
vhs,
segmentLoaders: {
['AUDIO']: audioSegmentLoader,
main: mainSegmentLoader
}
} = settings;
// setup active group and track getters and change event handlers
['AUDIO', 'SUBTITLES'].forEach((type) => {
mediaTypes[type].activeGroup = activeGroup(type, settings);
mediaTypes[type].activeTrack = activeTrack[type](type, settings);
mediaTypes[type].onGroupChanged = onGroupChanged(type, settings);
mediaTypes[type].onGroupChanging = onGroupChanging(type, settings);
mediaTypes[type].onTrackChanged = onTrackChanged(type, settings);
mediaTypes[type].getActiveGroup = getActiveGroup(type, settings);
});
// DO NOT enable the default subtitle or caption track.
// DO enable the default audio track
const audioGroup = mediaTypes.AUDIO.activeGroup();
if (audioGroup) {
const groupId = (audioGroup.filter(group => group.default)[0] || audioGroup[0]).id;
mediaTypes.AUDIO.tracks[groupId].enabled = true;
mediaTypes.AUDIO.onGroupChanged();
mediaTypes.AUDIO.onTrackChanged();
const activeAudioGroup = mediaTypes.AUDIO.getActiveGroup();
// a similar check for handling setAudio on each loader is run again each time the
// track is changed, but needs to be handled here since the track may not be considered
// changed on the first call to onTrackChanged
if (!activeAudioGroup.playlistLoader) {
// either audio is muxed with video or the stream is audio only
mainSegmentLoader.setAudio(true);
} else {
// audio is demuxed
mainSegmentLoader.setAudio(false);
audioSegmentLoader.setAudio(true);
}
}
mainPlaylistLoader.on('mediachange', () => {
['AUDIO', 'SUBTITLES'].forEach(type => mediaTypes[type].onGroupChanged());
});
mainPlaylistLoader.on('mediachanging', () => {
['AUDIO', 'SUBTITLES'].forEach(type => mediaTypes[type].onGroupChanging());
});
// custom audio track change event handler for usage event
const onAudioTrackChanged = () => {
mediaTypes.AUDIO.onTrackChanged();
tech.trigger({ type: 'usage', name: 'vhs-audio-change' });
};
tech.audioTracks().addEventListener('change', onAudioTrackChanged);
tech.remoteTextTracks().addEventListener(
'change',
mediaTypes.SUBTITLES.onTrackChanged
);
vhs.on('dispose', () => {
tech.audioTracks().removeEventListener('change', onAudioTrackChanged);
tech.remoteTextTracks().removeEventListener(
'change',
mediaTypes.SUBTITLES.onTrackChanged
);
});
// clear existing audio tracks and add the ones we just created
tech.clearTracks('audio');
for (const id in mediaTypes.AUDIO.tracks) {
tech.audioTracks().addTrack(mediaTypes.AUDIO.tracks[id]);
}
};
/**
* Creates skeleton object used to store the loaders, tracks, and utility methods for each
* media type
*
* @return {Object}
* Object to store the loaders, tracks, and utility methods for each media type
* @function createMediaTypes
*/
export const createMediaTypes = () => {
const mediaTypes = {};
['AUDIO', 'SUBTITLES', 'CLOSED-CAPTIONS'].forEach((type) => {
mediaTypes[type] = {
groups: {},
tracks: {},
activePlaylistLoader: null,
activeGroup: noop,
activeTrack: noop,
getActiveGroup: noop,
onGroupChanged: noop,
onTrackChanged: noop,
lastTrack_: null,
logger_: logger(`MediaGroups[${type}]`)
};
});
return mediaTypes;
};
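// A usage sketch: the skeleton starts inert (noop getters, no loaders) and is
// populated later by setupMediaGroups and the initialize functions above.
const exampleMediaTypes = createMediaTypes();
// exampleMediaTypes.AUDIO.activePlaylistLoader === null and
// exampleMediaTypes.AUDIO.activeTrack() === undefined (noop) until initialize.AUDIO runs.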

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,658 @@
/**
* @file playback-watcher.js
*
* Playback starts, and now my watch begins. It shall not end until my death. I shall
* take no wait, hold no uncleared timeouts, father no bad seeks. I shall wear no crowns
* and win no glory. I shall live and die at my post. I am the corrector of the underflow.
* I am the watcher of gaps. I am the shield that guards the realms of seekable. I pledge
* my life and honor to the Playback Watch, for this Player and all the Players to come.
*/
import window from 'global/window';
import * as Ranges from './ranges';
import logger from './util/logger';
import { createTimeRanges } from './util/vjs-compat';
import videojs from 'video.js';
// Set of events that reset the playback-watcher time check logic and clear the timeout
const timerCancelEvents = [
'seeking',
'seeked',
'pause',
'playing',
'error'
];
/**
* @class PlaybackWatcher
*/
export default class PlaybackWatcher extends videojs.EventTarget {
/**
 * Represents a PlaybackWatcher object.
*
* @class
* @param {Object} options an object that includes the tech and settings
*/
constructor(options) {
super();
this.playlistController_ = options.playlistController;
this.tech_ = options.tech;
this.seekable = options.seekable;
this.allowSeeksWithinUnsafeLiveWindow = options.allowSeeksWithinUnsafeLiveWindow;
this.liveRangeSafeTimeDelta = options.liveRangeSafeTimeDelta;
this.media = options.media;
this.playedRanges_ = [];
this.consecutiveUpdates = 0;
this.lastRecordedTime = null;
this.checkCurrentTimeTimeout_ = null;
this.logger_ = logger('PlaybackWatcher');
this.logger_('initialize');
const playHandler = () => this.monitorCurrentTime_();
const canPlayHandler = () => this.monitorCurrentTime_();
const waitingHandler = () => this.techWaiting_();
const cancelTimerHandler = () => this.resetTimeUpdate_();
const pc = this.playlistController_;
const loaderTypes = ['main', 'subtitle', 'audio'];
const loaderChecks = {};
loaderTypes.forEach((type) => {
loaderChecks[type] = {
reset: () => this.resetSegmentDownloads_(type),
updateend: () => this.checkSegmentDownloads_(type)
};
pc[`${type}SegmentLoader_`].on('appendsdone', loaderChecks[type].updateend);
// If a rendition switch happens during a playback stall where the buffer
// isn't changing we want to reset. We cannot assume that the new rendition
// will also be stalled, until after new appends.
pc[`${type}SegmentLoader_`].on('playlistupdate', loaderChecks[type].reset);
// Playback stalls should not be detected right after seeking.
// This prevents one-segment playlists (single vtt or single segment content)
// from being detected as stalling, since the buffer will not change in those
// cases: it already spans the entire video duration.
this.tech_.on(['seeked', 'seeking'], loaderChecks[type].reset);
});
/**
* We check if a seek was into a gap through the following steps:
* 1. We get a seeking event and we do not get a seeked event. This means that
* a seek was attempted but not completed.
* 2. We run `fixesBadSeeks_` on segment loader appends. This means that we already
* removed everything from our buffer and appended a segment, and should be ready
* to check for gaps.
*/
const setSeekingHandlers = (fn) => {
['main', 'audio'].forEach((type) => {
pc[`${type}SegmentLoader_`][fn]('appended', this.seekingAppendCheck_);
});
};
this.seekingAppendCheck_ = () => {
if (this.fixesBadSeeks_()) {
this.consecutiveUpdates = 0;
this.lastRecordedTime = this.tech_.currentTime();
setSeekingHandlers('off');
}
};
this.clearSeekingAppendCheck_ = () => setSeekingHandlers('off');
this.watchForBadSeeking_ = () => {
this.clearSeekingAppendCheck_();
setSeekingHandlers('on');
};
this.tech_.on('seeked', this.clearSeekingAppendCheck_);
this.tech_.on('seeking', this.watchForBadSeeking_);
this.tech_.on('waiting', waitingHandler);
this.tech_.on(timerCancelEvents, cancelTimerHandler);
this.tech_.on('canplay', canPlayHandler);
/*
An edge case exists that results in gaps not being skipped when they exist at the beginning of a stream. This case
is surfaced in one of two ways:
1) The `waiting` event is fired before the player has buffered content, making it impossible
to find or skip the gap. The `waiting` event is followed by a `play` event. On first play
we can check if playback is stalled due to a gap, and skip the gap if necessary.
2) A source with a gap at the beginning of the stream is loaded programmatically while the player
is in a playing state. To catch this case, it's important that our one-time play listener is set up
even if the player is in a playing state.
*/
this.tech_.one('play', playHandler);
// Define the dispose function to clean up our events
this.dispose = () => {
this.clearSeekingAppendCheck_();
this.logger_('dispose');
this.tech_.off('waiting', waitingHandler);
this.tech_.off(timerCancelEvents, cancelTimerHandler);
this.tech_.off('canplay', canPlayHandler);
this.tech_.off('play', playHandler);
this.tech_.off('seeking', this.watchForBadSeeking_);
this.tech_.off('seeked', this.clearSeekingAppendCheck_);
loaderTypes.forEach((type) => {
pc[`${type}SegmentLoader_`].off('appendsdone', loaderChecks[type].updateend);
pc[`${type}SegmentLoader_`].off('playlistupdate', loaderChecks[type].reset);
this.tech_.off(['seeked', 'seeking'], loaderChecks[type].reset);
});
if (this.checkCurrentTimeTimeout_) {
window.clearTimeout(this.checkCurrentTimeTimeout_);
}
this.resetTimeUpdate_();
};
}
/**
* Periodically check current time to see if playback stopped
*
* @private
*/
monitorCurrentTime_() {
this.checkCurrentTime_();
if (this.checkCurrentTimeTimeout_) {
window.clearTimeout(this.checkCurrentTimeTimeout_);
}
// 42ms = 24 fps; 250ms is what WebKit uses; Firefox uses 15ms
this.checkCurrentTimeTimeout_ =
window.setTimeout(this.monitorCurrentTime_.bind(this), 250);
}
/**
* Reset stalled download stats for a specific type of loader
*
* @param {string} type
* The segment loader type to check.
*
* @listens SegmentLoader#playlistupdate
* @listens Tech#seeking
* @listens Tech#seeked
*/
resetSegmentDownloads_(type) {
const loader = this.playlistController_[`${type}SegmentLoader_`];
if (this[`${type}StalledDownloads_`] > 0) {
this.logger_(`resetting possible stalled download count for ${type} loader`);
}
this[`${type}StalledDownloads_`] = 0;
this[`${type}Buffered_`] = loader.buffered_();
}
/**
* Checks on every segment `appendsdone` to see
 * if segment appends are making progress. If they are not,
 * and we are still downloading bytes, we exclude the playlist.
*
* @param {string} type
* The segment loader type to check.
*
* @listens SegmentLoader#appendsdone
*/
checkSegmentDownloads_(type) {
const pc = this.playlistController_;
const loader = pc[`${type}SegmentLoader_`];
const buffered = loader.buffered_();
const isBufferedDifferent = Ranges.isRangeDifferent(this[`${type}Buffered_`], buffered);
this[`${type}Buffered_`] = buffered;
// if the buffered value for this loader changed,
// appends are working
if (isBufferedDifferent) {
const metadata = {
bufferedRanges: buffered
};
pc.trigger({ type: 'bufferedrangeschanged', metadata });
this.resetSegmentDownloads_(type);
return;
}
this[`${type}StalledDownloads_`]++;
this.logger_(`found #${this[`${type}StalledDownloads_`]} ${type} appends that did not increase buffer (possible stalled download)`, {
playlistId: loader.playlist_ && loader.playlist_.id,
buffered: Ranges.timeRangesToArray(buffered)
});
// after 10 possibly stalled appends with no reset, exclude
if (this[`${type}StalledDownloads_`] < 10) {
return;
}
this.logger_(`${type} loader stalled download exclusion`);
this.resetSegmentDownloads_(type);
this.tech_.trigger({type: 'usage', name: `vhs-${type}-download-exclusion`});
if (type === 'subtitle') {
return;
}
// TODO: should we exclude audio tracks rather than main tracks
// when type is audio?
pc.excludePlaylist({
error: { message: `Excessive ${type} segment downloading detected.` },
playlistExclusionDuration: Infinity
});
}
/**
* The purpose of this function is to emulate the "waiting" event on
* browsers that do not emit it when they are waiting for more
* data to continue playback
*
* @private
*/
checkCurrentTime_() {
if (this.tech_.paused() || this.tech_.seeking()) {
return;
}
const currentTime = this.tech_.currentTime();
const buffered = this.tech_.buffered();
if (this.lastRecordedTime === currentTime &&
(!buffered.length ||
currentTime + Ranges.SAFE_TIME_DELTA >= buffered.end(buffered.length - 1))) {
// If current time is at the end of the final buffered region, then any playback
// stall is most likely caused by buffering in a low bandwidth environment. The tech
// should fire a `waiting` event in this scenario, but it may not due to browser
// and tech inconsistencies. Calling `techWaiting_` here allows us to simulate
// responding to a native `waiting` event when the tech fails to emit one.
return this.techWaiting_();
}
if (this.consecutiveUpdates >= 5 &&
currentTime === this.lastRecordedTime) {
this.consecutiveUpdates++;
this.waiting_();
} else if (currentTime === this.lastRecordedTime) {
this.consecutiveUpdates++;
} else {
this.playedRanges_.push(createTimeRanges([this.lastRecordedTime, currentTime]));
const metadata = {
playedRanges: this.playedRanges_
};
this.playlistController_.trigger({ type: 'playedrangeschanged', metadata });
this.consecutiveUpdates = 0;
this.lastRecordedTime = currentTime;
}
}
/**
* Resets the 'timeupdate' mechanism designed to detect that we are stalled
*
* @private
*/
resetTimeUpdate_() {
this.consecutiveUpdates = 0;
}
/**
* Fixes situations where there's a bad seek
*
* @return {boolean} whether an action was taken to fix the seek
* @private
*/
fixesBadSeeks_() {
const seeking = this.tech_.seeking();
if (!seeking) {
return false;
}
// TODO: It's possible that these seekable checks should be moved out of this function
// and into a function that runs on seekablechange. It's also possible that we only need
// afterSeekableWindow as the buffered check at the bottom is good enough to handle before
// seekable range.
const seekable = this.seekable();
const currentTime = this.tech_.currentTime();
const isAfterSeekableRange = this.afterSeekableWindow_(
seekable,
currentTime,
this.media(),
this.allowSeeksWithinUnsafeLiveWindow
);
let seekTo;
if (isAfterSeekableRange) {
const seekableEnd = seekable.end(seekable.length - 1);
// sync to live point (if VOD, our seekable was updated and we're simply adjusting)
seekTo = seekableEnd;
}
if (this.beforeSeekableWindow_(seekable, currentTime)) {
const seekableStart = seekable.start(0);
// sync to the beginning of the live window
// provide a buffer of .1 seconds to handle rounding/imprecise numbers
seekTo = seekableStart +
// if the playlist is too short and the seekable range is an exact time (can
// happen in live with a 3 segment playlist), then don't use a time delta
(seekableStart === seekable.end(0) ? 0 : Ranges.SAFE_TIME_DELTA);
}
if (typeof seekTo !== 'undefined') {
this.logger_(`Trying to seek outside of seekable at time ${currentTime} with ` +
`seekable range ${Ranges.printableRange(seekable)}. Seeking to ` +
`${seekTo}.`);
this.tech_.setCurrentTime(seekTo);
return true;
}
const sourceUpdater = this.playlistController_.sourceUpdater_;
const buffered = this.tech_.buffered();
const audioBuffered = sourceUpdater.audioBuffer ? sourceUpdater.audioBuffered() : null;
const videoBuffered = sourceUpdater.videoBuffer ? sourceUpdater.videoBuffered() : null;
const media = this.media();
// verify that at least two segment durations or one part duration have been
// appended before checking for a gap.
const minAppendedDuration = media.partTargetDuration ? media.partTargetDuration :
(media.targetDuration - Ranges.TIME_FUDGE_FACTOR) * 2;
// check the audio and video buffers individually; either may be null (e.g.,
// for muxed content or audio-only streams).
const bufferedToCheck = [audioBuffered, videoBuffered];
for (let i = 0; i < bufferedToCheck.length; i++) {
// skip null buffered
if (!bufferedToCheck[i]) {
continue;
}
const timeAhead = Ranges.timeAheadOf(bufferedToCheck[i], currentTime);
// if we are less than two video/audio segment durations or one part
// duration behind we haven't appended enough to call this a bad seek.
if (timeAhead < minAppendedDuration) {
return false;
}
}
const nextRange = Ranges.findNextRange(buffered, currentTime);
// we have appended enough content, but we don't have anything buffered
// to seek over the gap
if (nextRange.length === 0) {
return false;
}
seekTo = nextRange.start(0) + Ranges.SAFE_TIME_DELTA;
this.logger_(`Buffered region starts (${nextRange.start(0)}) ` +
`just beyond seek point (${currentTime}). Seeking to ${seekTo}.`);
this.tech_.setCurrentTime(seekTo);
return true;
}
/**
* Handler for situations when we determine the player is waiting.
*
* @private
*/
waiting_() {
if (this.techWaiting_()) {
return;
}
// All tech waiting checks failed. Use last resort correction
const currentTime = this.tech_.currentTime();
const buffered = this.tech_.buffered();
const currentRange = Ranges.findRange(buffered, currentTime);
// Sometimes the player can stall for unknown reasons within a contiguous buffered
// region with no indication that anything is amiss (seen in Firefox). Seeking to
// currentTime is usually enough to kickstart the player. This checks that the player
// is currently within a buffered region before attempting a corrective seek.
// Chrome does not appear to continue `timeupdate` events after a `waiting` event
// until there is ~ 3 seconds of forward buffer available. PlaybackWatcher should also
// make sure there is ~3 seconds of forward buffer before taking any corrective action
// to avoid triggering an `unknownwaiting` event when the network is slow.
if (currentRange.length && currentTime + 3 <= currentRange.end(0)) {
this.resetTimeUpdate_();
this.tech_.setCurrentTime(currentTime);
this.logger_(`Stopped at ${currentTime} while inside a buffered region ` +
`[${currentRange.start(0)} -> ${currentRange.end(0)}]. Attempting to resume ` +
'playback by seeking to the current time.');
// unknown waiting corrections may be useful for monitoring QoS
this.tech_.trigger({type: 'usage', name: 'vhs-unknown-waiting'});
return;
}
}
/**
* Handler for situations when the tech fires a `waiting` event
*
* @return {boolean}
 *         True if one of the checks handled the waiting state (whether or not
 *         corrective action was needed). False if no checks passed.
* @private
*/
techWaiting_() {
const seekable = this.seekable();
const currentTime = this.tech_.currentTime();
if (this.tech_.seeking()) {
// Tech is seeking or already waiting on another action, no action needed
return true;
}
if (this.beforeSeekableWindow_(seekable, currentTime)) {
const livePoint = seekable.end(seekable.length - 1);
this.logger_(`Fell out of live window at time ${currentTime}. Seeking to ` +
`live point (seekable end) ${livePoint}`);
this.resetTimeUpdate_();
this.tech_.setCurrentTime(livePoint);
// live window resyncs may be useful for monitoring QoS
this.tech_.trigger({type: 'usage', name: 'vhs-live-resync'});
return true;
}
const sourceUpdater = this.tech_.vhs.playlistController_.sourceUpdater_;
const buffered = this.tech_.buffered();
const videoUnderflow = this.videoUnderflow_({
audioBuffered: sourceUpdater.audioBuffered(),
videoBuffered: sourceUpdater.videoBuffered(),
currentTime
});
if (videoUnderflow) {
// Even though the video underflowed and was stuck in a gap, the audio overplayed
// the gap, leading currentTime into a buffered range. Seeking to currentTime
// allows the video to catch up to the audio position without losing any audio
// (only suffering ~3 seconds of frozen video and a pause in audio playback).
this.resetTimeUpdate_();
this.tech_.setCurrentTime(currentTime);
// video underflow may be useful for monitoring QoS
this.tech_.trigger({type: 'usage', name: 'vhs-video-underflow'});
return true;
}
const nextRange = Ranges.findNextRange(buffered, currentTime);
// check for gap
if (nextRange.length > 0) {
this.logger_(`Stopped at ${currentTime} and seeking to ${nextRange.start(0)}`);
this.resetTimeUpdate_();
this.skipTheGap_(currentTime);
return true;
}
// All checks failed. Returning false to indicate failure to correct waiting
return false;
}
afterSeekableWindow_(seekable, currentTime, playlist, allowSeeksWithinUnsafeLiveWindow = false) {
if (!seekable.length) {
// we can't make a solid case if there's no seekable, default to false
return false;
}
let allowedEnd = seekable.end(seekable.length - 1) + Ranges.SAFE_TIME_DELTA;
const isLive = !playlist.endList;
const isLLHLS = typeof playlist.partTargetDuration === 'number';
if (isLive && (isLLHLS || allowSeeksWithinUnsafeLiveWindow)) {
allowedEnd = seekable.end(seekable.length - 1) + (playlist.targetDuration * 3);
}
if (currentTime > allowedEnd) {
return true;
}
return false;
}
beforeSeekableWindow_(seekable, currentTime) {
if (seekable.length &&
// can't fall before 0 and 0 seekable start identifies VOD stream
seekable.start(0) > 0 &&
currentTime < seekable.start(0) - this.liveRangeSafeTimeDelta) {
return true;
}
return false;
}
videoUnderflow_({videoBuffered, audioBuffered, currentTime}) {
// audio only content will not have video underflow :)
if (!videoBuffered) {
return;
}
let gap;
// find a gap in demuxed content.
if (videoBuffered.length && audioBuffered.length) {
// in Chrome audio will continue to play for ~3s when we run out of video
// so we have to check that the video buffer did have some buffer in the
// past.
const lastVideoRange = Ranges.findRange(videoBuffered, currentTime - 3);
const videoRange = Ranges.findRange(videoBuffered, currentTime);
const audioRange = Ranges.findRange(audioBuffered, currentTime);
if (audioRange.length && !videoRange.length && lastVideoRange.length) {
gap = {start: lastVideoRange.end(0), end: audioRange.end(0)};
}
// find a gap in muxed content.
} else {
const nextRange = Ranges.findNextRange(videoBuffered, currentTime);
// Even if there is no available next range, there is still a possibility we are
// stuck in a gap due to video underflow.
if (!nextRange.length) {
gap = this.gapFromVideoUnderflow_(videoBuffered, currentTime);
}
}
if (gap) {
this.logger_(`Encountered a gap in video from ${gap.start} to ${gap.end}. ` +
`Seeking to current time ${currentTime}`);
return true;
}
return false;
}
/**
* Timer callback. If playback still has not proceeded, then we seek
* to the start of the next buffered region.
*
* @private
*/
skipTheGap_(scheduledCurrentTime) {
const buffered = this.tech_.buffered();
const currentTime = this.tech_.currentTime();
const nextRange = Ranges.findNextRange(buffered, currentTime);
this.resetTimeUpdate_();
if (nextRange.length === 0 ||
currentTime !== scheduledCurrentTime) {
return;
}
this.logger_(
'skipTheGap_:',
'currentTime:', currentTime,
'scheduled currentTime:', scheduledCurrentTime,
'nextRange start:', nextRange.start(0)
);
// only seek if we still have not played
this.tech_.setCurrentTime(nextRange.start(0) + Ranges.TIME_FUDGE_FACTOR);
const metadata = {
gapInfo: {
from: currentTime,
to: nextRange.start(0)
}
};
this.playlistController_.trigger({type: 'gapjumped', metadata});
this.tech_.trigger({type: 'usage', name: 'vhs-gap-skip'});
}
gapFromVideoUnderflow_(buffered, currentTime) {
// At least in Chrome, if there is a gap in the video buffer, the audio will continue
// playing for ~3 seconds after the video gap starts. This is done to account for
// video buffer underflow/underrun (note that this is not done when there is audio
// buffer underflow/underrun -- in that case the video will stop as soon as it
// encounters the gap, as audio stalls are more noticeable/jarring to a user than
// video stalls). The player's time will reflect the playthrough of audio, so the
// time will appear as if we are in a buffered region, even if we are stuck in a
// "gap."
//
// Example:
// video buffer: 0 => 10.1, 10.2 => 20
// audio buffer: 0 => 20
// overall buffer: 0 => 10.1, 10.2 => 20
// current time: 13
//
// Chrome's video froze at 10 seconds, where the video buffer encountered the gap,
// however, the audio continued playing until it reached ~3 seconds past the gap
// (13 seconds), at which point it stops as well. Since current time is past the
// gap, findNextRange will return no ranges.
//
// To check for this issue, we see if there is a gap that starts somewhere within
// a 3 second range (3 seconds +/- 1 second) back from our current time.
const gaps = Ranges.findGaps(buffered);
for (let i = 0; i < gaps.length; i++) {
const start = gaps.start(i);
const end = gaps.end(i);
// the gap starts between 2 and 4 seconds back from the current time
if (currentTime - start < 4 && currentTime - start > 2) {
return {
start,
end
};
}
}
return null;
}
}
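// A standalone sketch of the window test at the heart of gapFromVideoUnderflow_
// (illustrative; the helper name is not part of the module): a gap qualifies
// when its start lies 2-4 seconds behind the current time, i.e. ~3 seconds
// +/- 1 second, matching Chrome's audio overplay behavior described above.
const isUnderflowGapCandidate = (gapStart, currentTime) =>
currentTime - gapStart < 4 && currentTime - gapStart > 2;
// isUnderflowGapCandidate(10.1, 13); // => true (video froze at 10.1, audio played on to 13)
// isUnderflowGapCandidate(10.1, 15); // => false (too far back to be underflow)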

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,588 @@
import window from 'global/window';
import Config from './config';
import Playlist from './playlist';
import { codecsForPlaylist } from './util/codecs.js';
import logger from './util/logger';
const logFn = logger('PlaylistSelector');
const representationToString = function(representation) {
if (!representation || !representation.playlist) {
return;
}
const playlist = representation.playlist;
return JSON.stringify({
id: playlist.id,
bandwidth: representation.bandwidth,
width: representation.width,
height: representation.height,
codecs: playlist.attributes && playlist.attributes.CODECS || ''
});
};
// Utilities
/**
* Returns the CSS value for the specified property on an element
* using `getComputedStyle`. Firefox has a long-standing issue where
* getComputedStyle() may return null when running in an iframe with
* `display: none`.
*
* @see https://bugzilla.mozilla.org/show_bug.cgi?id=548397
 * @param {HTMLElement} el the HTMLElement to work on
 * @param {string} property the property to get the style for
*/
const safeGetComputedStyle = function(el, property) {
if (!el) {
return '';
}
const result = window.getComputedStyle(el);
if (!result) {
return '';
}
return result[property];
};
/**
 * Reusable stable sort function. Sorts `array` in place, breaking ties by the
 * elements' original order.
 *
 * @param {Array} array the array of playlists to sort
 * @param {Function} sortFn a comparator function
* @function stableSort
*/
const stableSort = function(array, sortFn) {
const newArray = array.slice();
array.sort(function(left, right) {
const cmp = sortFn(left, right);
if (cmp === 0) {
return newArray.indexOf(left) - newArray.indexOf(right);
}
return cmp;
});
};
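// A usage sketch (illustrative values): Array.prototype.sort was not guaranteed
// stable across the engines this code targets, so ties are broken by position
// in an untouched copy. Note the sort happens in place and nothing is returned.
const exampleReps = [
{ id: 'a', bandwidth: 2e6 },
{ id: 'b', bandwidth: 1e6 },
{ id: 'c', bandwidth: 2e6 }
];
stableSort(exampleReps, (left, right) => left.bandwidth - right.bandwidth);
// => order b, a, c ('a' keeps its original position ahead of 'c')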
/**
* A comparator function to sort two playlist object by bandwidth.
*
* @param {Object} left a media playlist object
* @param {Object} right a media playlist object
* @return {number} Greater than zero if the bandwidth attribute of
* left is greater than the corresponding attribute of right. Less
* than zero if the bandwidth of right is greater than left and
* exactly zero if the two are equal.
*/
export const comparePlaylistBandwidth = function(left, right) {
let leftBandwidth;
let rightBandwidth;
if (left.attributes.BANDWIDTH) {
leftBandwidth = left.attributes.BANDWIDTH;
}
leftBandwidth = leftBandwidth || window.Number.MAX_VALUE;
if (right.attributes.BANDWIDTH) {
rightBandwidth = right.attributes.BANDWIDTH;
}
rightBandwidth = rightBandwidth || window.Number.MAX_VALUE;
return leftBandwidth - rightBandwidth;
};
/**
* A comparator function to sort two playlist object by resolution (width).
*
* @param {Object} left a media playlist object
* @param {Object} right a media playlist object
* @return {number} Greater than zero if the resolution.width attribute of
* left is greater than the corresponding attribute of right. Less
* than zero if the resolution.width of right is greater than left and
* exactly zero if the two are equal.
*/
export const comparePlaylistResolution = function(left, right) {
let leftWidth;
let rightWidth;
if (left.attributes.RESOLUTION &&
left.attributes.RESOLUTION.width) {
leftWidth = left.attributes.RESOLUTION.width;
}
leftWidth = leftWidth || window.Number.MAX_VALUE;
if (right.attributes.RESOLUTION &&
right.attributes.RESOLUTION.width) {
rightWidth = right.attributes.RESOLUTION.width;
}
rightWidth = rightWidth || window.Number.MAX_VALUE;
// NOTE - Fallback to bandwidth sort as appropriate in cases where multiple renditions
// have the same media dimensions/resolution
if (leftWidth === rightWidth &&
left.attributes.BANDWIDTH &&
right.attributes.BANDWIDTH) {
return left.attributes.BANDWIDTH - right.attributes.BANDWIDTH;
}
return leftWidth - rightWidth;
};
/**
* Chooses the appropriate media playlist based on bandwidth and player size
*
* @param {Object} settings
* Object of information required to use this selector
* @param {Object} settings.main
* Object representation of the main manifest
* @param {number} settings.bandwidth
* Current calculated bandwidth of the player
* @param {number} settings.playerWidth
* Current width of the player element (should account for the device pixel ratio)
* @param {number} settings.playerHeight
* Current height of the player element (should account for the device pixel ratio)
* @param {number} settings.playerObjectFit
* Current value of the video element's object-fit CSS property. Allows taking into
* account that the video might be scaled up to cover the media element when selecting
* media playlists based on player size.
* @param {boolean} settings.limitRenditionByPlayerDimensions
* True if the player width and height should be used during the selection, false otherwise
* @param {Object} settings.playlistController
* the current playlistController object
* @return {Playlist} the highest bitrate playlist less than the
* currently detected bandwidth, accounting for some amount of
* bandwidth variance
*/
export let simpleSelector = function(settings) {
const {
main,
bandwidth: playerBandwidth,
playerWidth,
playerHeight,
playerObjectFit,
limitRenditionByPlayerDimensions,
playlistController
} = settings;
// If we end up getting called before `main` is available, exit early
if (!main) {
return;
}
const options = {
bandwidth: playerBandwidth,
width: playerWidth,
height: playerHeight,
limitRenditionByPlayerDimensions
};
let playlists = main.playlists;
// if playlist is audio only, select between currently active audio group playlists.
if (Playlist.isAudioOnly(main)) {
playlists = playlistController.getAudioTrackPlaylists_();
// add audioOnly to options so that we log audioOnly: true
// at the bottom of this function for debugging.
options.audioOnly = true;
}
// convert the playlists to an intermediary representation to make comparisons easier
let sortedPlaylistReps = playlists.map((playlist) => {
let bandwidth;
const width = playlist.attributes && playlist.attributes.RESOLUTION && playlist.attributes.RESOLUTION.width;
const height = playlist.attributes && playlist.attributes.RESOLUTION && playlist.attributes.RESOLUTION.height;
bandwidth = playlist.attributes && playlist.attributes.BANDWIDTH;
bandwidth = bandwidth || window.Number.MAX_VALUE;
return {
bandwidth,
width,
height,
playlist
};
});
stableSort(sortedPlaylistReps, (left, right) => left.bandwidth - right.bandwidth);
// filter out any playlists that have been excluded due to
// incompatible configurations
sortedPlaylistReps = sortedPlaylistReps.filter((rep) => !Playlist.isIncompatible(rep.playlist));
// filter out any playlists that have been disabled manually through the representations
// api or excluded temporarily due to playback errors.
let enabledPlaylistReps = sortedPlaylistReps.filter((rep) => Playlist.isEnabled(rep.playlist));
if (!enabledPlaylistReps.length) {
// if there are no enabled playlists, then they have all been excluded or disabled
// by the user through the representations api. In this case, ignore exclusion and
// fall back to what the user wants by using playlists the user has not disabled.
enabledPlaylistReps = sortedPlaylistReps.filter((rep) => !Playlist.isDisabled(rep.playlist));
}
// filter out any variant that has greater effective bitrate
// than the current estimated bandwidth
const bandwidthPlaylistReps = enabledPlaylistReps.filter((rep) => rep.bandwidth * Config.BANDWIDTH_VARIANCE < playerBandwidth);
let highestRemainingBandwidthRep =
bandwidthPlaylistReps[bandwidthPlaylistReps.length - 1];
// get all of the renditions with the same (highest) bandwidth
// and then take the very first element
const bandwidthBestRep = bandwidthPlaylistReps.filter((rep) => rep.bandwidth === highestRemainingBandwidthRep.bandwidth)[0];
// if we're not going to limit renditions by player size, make an early decision.
if (limitRenditionByPlayerDimensions === false) {
const chosenRep = (
bandwidthBestRep ||
enabledPlaylistReps[0] ||
sortedPlaylistReps[0]
);
if (chosenRep && chosenRep.playlist) {
let type = 'sortedPlaylistReps';
if (bandwidthBestRep) {
type = 'bandwidthBestRep';
}
if (enabledPlaylistReps[0]) {
type = 'enabledPlaylistReps';
}
logFn(`choosing ${representationToString(chosenRep)} using ${type} with options`, options);
return chosenRep.playlist;
}
logFn('could not choose a playlist with options', options);
return null;
}
// filter out playlists without resolution information
const haveResolution = bandwidthPlaylistReps.filter((rep) => rep.width && rep.height);
// sort variants by resolution
stableSort(haveResolution, (left, right) => left.width - right.width);
// if we have the exact resolution as the player use it
const resolutionBestRepList = haveResolution.filter((rep) => rep.width === playerWidth && rep.height === playerHeight);
highestRemainingBandwidthRep = resolutionBestRepList[resolutionBestRepList.length - 1];
// ensure that we pick the highest bandwidth variant that has the exact resolution
const resolutionBestRep = resolutionBestRepList.filter((rep) => rep.bandwidth === highestRemainingBandwidthRep.bandwidth)[0];
let resolutionPlusOneList;
let resolutionPlusOneSmallest;
let resolutionPlusOneRep;
// find the smallest variant that is larger than the player
// if there is no match of exact resolution
if (!resolutionBestRep) {
resolutionPlusOneList = haveResolution.filter((rep) => {
if (playerObjectFit === 'cover') {
// video will be scaled up to cover the player. We need to
// make sure rendition is at least as wide and as high as the
// player.
return rep.width > playerWidth && rep.height > playerHeight;
}
// video will be scaled down to fit inside the player as soon as
// its resolution exceeds player size in at least one dimension.
return rep.width > playerWidth || rep.height > playerHeight;
});
// find all the variants that have the same smallest resolution
resolutionPlusOneSmallest = resolutionPlusOneList.filter((rep) => rep.width === resolutionPlusOneList[0].width &&
rep.height === resolutionPlusOneList[0].height);
// ensure that we also pick the highest bandwidth variant that
// is just-larger-than the video player
highestRemainingBandwidthRep =
resolutionPlusOneSmallest[resolutionPlusOneSmallest.length - 1];
resolutionPlusOneRep = resolutionPlusOneSmallest.filter((rep) => rep.bandwidth === highestRemainingBandwidthRep.bandwidth)[0];
}
let leastPixelDiffRep;
// If this selector proves to be better than others,
// resolutionPlusOneRep and resolutionBestRep and all
// the code involving them should be removed.
if (playlistController.leastPixelDiffSelector) {
// find the variant that is closest to the player's pixel size
const leastPixelDiffList = haveResolution.map((rep) => {
rep.pixelDiff = Math.abs(rep.width - playerWidth) + Math.abs(rep.height - playerHeight);
return rep;
});
// get the highest bandwidth, closest resolution playlist
stableSort(leastPixelDiffList, (left, right) => {
// sort by highest bandwidth if pixelDiff is the same
if (left.pixelDiff === right.pixelDiff) {
return right.bandwidth - left.bandwidth;
}
return left.pixelDiff - right.pixelDiff;
});
leastPixelDiffRep = leastPixelDiffList[0];
}
// fallback chain of variants
const chosenRep = (
leastPixelDiffRep ||
resolutionPlusOneRep ||
resolutionBestRep ||
bandwidthBestRep ||
enabledPlaylistReps[0] ||
sortedPlaylistReps[0]
);
if (chosenRep && chosenRep.playlist) {
let type = 'sortedPlaylistReps';
if (leastPixelDiffRep) {
type = 'leastPixelDiffRep';
} else if (resolutionPlusOneRep) {
type = 'resolutionPlusOneRep';
} else if (resolutionBestRep) {
type = 'resolutionBestRep';
} else if (bandwidthBestRep) {
type = 'bandwidthBestRep';
} else if (enabledPlaylistReps[0]) {
type = 'enabledPlaylistReps';
}
logFn(`choosing ${representationToString(chosenRep)} using ${type} with options`, options);
return chosenRep.playlist;
}
logFn('could not choose a playlist with options', options);
return null;
};
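// Invocation sketch (hypothetical manifest; bandwidth is in bits per
// second). With limitRenditionByPlayerDimensions disabled, the selector
// returns the highest-bandwidth rendition whose effective bitrate
// (BANDWIDTH * Config.BANDWIDTH_VARIANCE) fits under the estimate.
const exampleSimpleSelectorSketch = () => simpleSelector({
  main: {
    playlists: [
      { id: '0-low', attributes: { BANDWIDTH: 800000, CODECS: 'avc1.4d401f,mp4a.40.2' } },
      { id: '1-high', attributes: { BANDWIDTH: 4000000, CODECS: 'avc1.640028,mp4a.40.2' } }
    ]
  },
  bandwidth: 3000000,
  playerWidth: 640,
  playerHeight: 360,
  playerObjectFit: '',
  limitRenditionByPlayerDimensions: false,
  playlistController: { leastPixelDiffSelector: false }
});
// -> the 800000 playlist; 4000000 scaled by the variance exceeds the estimate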
export const TEST_ONLY_SIMPLE_SELECTOR = (newSimpleSelector) => {
const oldSimpleSelector = simpleSelector;
simpleSelector = newSimpleSelector;
return function resetSimpleSelector() {
simpleSelector = oldSimpleSelector;
};
};
// Playlist Selectors
/**
* Chooses the appropriate media playlist based on the most recent
* bandwidth estimate and the player size.
*
* Expects to be called within the context of an instance of VhsHandler
*
* @return {Playlist} the highest bitrate playlist less than the
* currently detected bandwidth, accounting for some amount of
* bandwidth variance
*/
export const lastBandwidthSelector = function() {
let pixelRatio = this.useDevicePixelRatio ? window.devicePixelRatio || 1 : 1;
if (!isNaN(this.customPixelRatio)) {
pixelRatio = this.customPixelRatio;
}
return simpleSelector({
main: this.playlists.main,
bandwidth: this.systemBandwidth,
playerWidth: parseInt(safeGetComputedStyle(this.tech_.el(), 'width'), 10) * pixelRatio,
playerHeight: parseInt(safeGetComputedStyle(this.tech_.el(), 'height'), 10) * pixelRatio,
playerObjectFit: this.usePlayerObjectFit ? safeGetComputedStyle(this.tech_.el(), 'objectFit') : '',
limitRenditionByPlayerDimensions: this.limitRenditionByPlayerDimensions,
playlistController: this.playlistController_
});
};
/**
* Chooses the appropriate media playlist based on an
* exponential-weighted moving average of the bandwidth after
* filtering for player size.
*
* Expects to be called within the context of an instance of VhsHandler
*
* @param {number} decay - a number between 0 and 1. Higher values of
* this parameter will cause previous bandwidth estimates to lose
* significance more quickly.
* @return {Function} a function which can be invoked to create a new
* playlist selector function.
* @see https://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average
*/
export const movingAverageBandwidthSelector = function(decay) {
let average = -1;
let lastSystemBandwidth = -1;
if (decay < 0 || decay > 1) {
throw new Error('Moving average bandwidth decay must be between 0 and 1.');
}
return function() {
let pixelRatio = this.useDevicePixelRatio ? window.devicePixelRatio || 1 : 1;
if (!isNaN(this.customPixelRatio)) {
pixelRatio = this.customPixelRatio;
}
if (average < 0) {
average = this.systemBandwidth;
lastSystemBandwidth = this.systemBandwidth;
}
// stop the average value from decaying every 250ms
// when the systemBandwidth is constant
// and
// stop the average from being set to a very low value when the
// systemBandwidth becomes 0 in case of chunk cancellation
if (this.systemBandwidth > 0 && this.systemBandwidth !== lastSystemBandwidth) {
average = decay * this.systemBandwidth + (1 - decay) * average;
lastSystemBandwidth = this.systemBandwidth;
}
return simpleSelector({
main: this.playlists.main,
bandwidth: average,
playerWidth: parseInt(safeGetComputedStyle(this.tech_.el(), 'width'), 10) * pixelRatio,
playerHeight: parseInt(safeGetComputedStyle(this.tech_.el(), 'height'), 10) * pixelRatio,
playerObjectFit: this.usePlayerObjectFit ? safeGetComputedStyle(this.tech_.el(), 'objectFit') : '',
limitRenditionByPlayerDimensions: this.limitRenditionByPlayerDimensions,
playlistController: this.playlistController_
});
};
};
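// Wiring sketch (assumes a video.js player with a VHS source; the
// `player.tech(...).vhs` access path is an assumption based on the
// http-streaming docs, not something defined in this file):
const exampleMovingAverageWiringSketch = (player) => {
  const vhs = player.tech({ IWillNotUseThisInPlugins: true }).vhs;

  // weigh recent measurements heavily: each new estimate contributes 55%
  vhs.selectPlaylist = movingAverageBandwidthSelector(0.55);
};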
/**
* Chooses the appropriate media playlist based on the potential to rebuffer
*
* @param {Object} settings
* Object of information required to use this selector
* @param {Object} settings.main
* Object representation of the main manifest
* @param {number} settings.currentTime
* The current time of the player
* @param {number} settings.bandwidth
* Current measured bandwidth
* @param {number} settings.duration
* Duration of the media
* @param {number} settings.segmentDuration
* Segment duration to be used in round trip time calculations
* @param {number} settings.timeUntilRebuffer
* Time left in seconds until the player has to rebuffer
* @param {number} settings.currentTimeline
* The current timeline segments are being loaded from
* @param {SyncController} settings.syncController
* SyncController for determining if we have a sync point for a given playlist
* @return {Object|null}
* {Object} return.playlist
* The highest bandwidth playlist with the least amount of rebuffering
* {Number} return.rebufferingImpact
* The amount of time in seconds switching to this playlist will rebuffer. A
* negative value means that switching will cause zero rebuffering.
*/
export const minRebufferMaxBandwidthSelector = function(settings) {
const {
main,
currentTime,
bandwidth,
duration,
segmentDuration,
timeUntilRebuffer,
currentTimeline,
syncController
} = settings;
// filter out any playlists that have been excluded due to
// incompatible configurations
const compatiblePlaylists = main.playlists.filter(playlist => !Playlist.isIncompatible(playlist));
// filter out any playlists that have been disabled manually through the representations
// api or excluded temporarily due to playback errors.
let enabledPlaylists = compatiblePlaylists.filter(Playlist.isEnabled);
if (!enabledPlaylists.length) {
// if there are no enabled playlists, then they have all been excluded or disabled
// by the user through the representations api. In this case, ignore exclusion and
// fall back to what the user wants by using playlists the user has not disabled.
enabledPlaylists = compatiblePlaylists.filter(playlist => !Playlist.isDisabled(playlist));
}
const bandwidthPlaylists =
enabledPlaylists.filter(Playlist.hasAttribute.bind(null, 'BANDWIDTH'));
const rebufferingEstimates = bandwidthPlaylists.map((playlist) => {
const syncPoint = syncController.getSyncPoint(
playlist,
duration,
currentTimeline,
currentTime
);
// If there is no sync point for this playlist, switching to it will require a
// sync request first. This will double the request time
const numRequests = syncPoint ? 1 : 2;
const requestTimeEstimate = Playlist.estimateSegmentRequestTime(
segmentDuration,
bandwidth,
playlist
);
const rebufferingImpact = (requestTimeEstimate * numRequests) - timeUntilRebuffer;
return {
playlist,
rebufferingImpact
};
});
const noRebufferingPlaylists = rebufferingEstimates.filter((estimate) => estimate.rebufferingImpact <= 0);
// Sort by bandwidth DESC
stableSort(
noRebufferingPlaylists,
(a, b) => comparePlaylistBandwidth(b.playlist, a.playlist)
);
if (noRebufferingPlaylists.length) {
return noRebufferingPlaylists[0];
}
stableSort(rebufferingEstimates, (a, b) => a.rebufferingImpact - b.rebufferingImpact);
return rebufferingEstimates[0] || null;
};
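// Worked sketch of the rebuffering arithmetic above (hypothetical numbers):
// a 4s segment from a 4,000,000 bps rendition over a 16,000,000 bps
// connection takes (4 * 4000000) / 16000000 = 1s per request. Without a
// sync point two requests are needed, so with 1.5s of buffer remaining the
// rebufferingImpact is (1 * 2) - 1.5 = 0.5s of expected rebuffering.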
/**
* Chooses the appropriate media playlist, which in this case is the lowest bitrate
* one with video. If no renditions with video exist, return the lowest audio rendition.
*
* Expects to be called within the context of an instance of VhsHandler
*
* @return {Object|null}
* {Object} return.playlist
* The lowest bitrate playlist that contains a video codec. If no such rendition
* exists pick the lowest audio rendition.
*/
export const lowestBitrateCompatibleVariantSelector = function() {
// filter out any playlists that have been excluded due to
// incompatible configurations or playback errors
const playlists = this.playlists.main.playlists.filter(Playlist.isEnabled);
// Sort ascending by bitrate
stableSort(
playlists,
(a, b) => comparePlaylistBandwidth(a, b)
);
// Parse and assume that playlists with no video codec have no video
// (this is not necessarily true, although it is generally true).
//
// If an entire manifest has no valid videos everything will get filtered
// out.
const playlistsWithVideo = playlists.filter(playlist => !!codecsForPlaylist(this.playlists.main, playlist).video);
return playlistsWithVideo[0] || null;
};

View File

@ -0,0 +1,806 @@
/**
* @file playlist.js
*
* Playlist related utilities.
*/
import window from 'global/window';
import {isAudioCodec} from '@videojs/vhs-utils/es/codecs.js';
import {TIME_FUDGE_FACTOR} from './ranges.js';
import {createTimeRanges} from './util/vjs-compat';
/**
* Get the duration of a segment, with special cases for
* llhls segments that do not have a duration yet.
*
* @param {Object} playlist
* the playlist that the segment belongs to.
* @param {Object} segment
* the segment to get a duration for.
*
* @return {number}
* the segment duration
*/
export const segmentDurationWithParts = (playlist, segment) => {
// if this isn't a preload segment
// then we will have a segment duration that is accurate.
if (!segment.preload) {
return segment.duration;
}
// otherwise we have to add up parts and preload hints
// to get an up to date duration.
let result = 0;
(segment.parts || []).forEach(function(p) {
result += p.duration;
});
// for preload hints we have to use partTargetDuration
// as they won't even have a duration yet.
(segment.preloadHints || []).forEach(function(p) {
if (p.type === 'PART') {
result += playlist.partTargetDuration;
}
});
return result;
};
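// Sketch (hypothetical LL-HLS preload segment): two advertised parts plus
// one PART preload hint, priced at the playlist's partTargetDuration.
const examplePreloadDurationSketch = () => {
  const playlist = { partTargetDuration: 1 };
  const segment = {
    preload: true,
    parts: [{ duration: 1 }, { duration: 1 }],
    preloadHints: [{ type: 'PART' }]
  };

  return segmentDurationWithParts(playlist, segment); // -> 3
};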
/**
* A function to get a combined list of parts and segments with durations
* and indexes.
*
* @param {Playlist} playlist the playlist to get the list for.
*
* @return {Array} The part/segment list.
*/
export const getPartsAndSegments = (playlist) => (playlist.segments || []).reduce((acc, segment, si) => {
if (segment.parts) {
segment.parts.forEach(function(part, pi) {
acc.push({duration: part.duration, segmentIndex: si, partIndex: pi, part, segment});
});
} else {
acc.push({duration: segment.duration, segmentIndex: si, partIndex: null, segment, part: null});
}
return acc;
}, []);
export const getLastParts = (media) => {
const lastSegment = media.segments && media.segments.length && media.segments[media.segments.length - 1];
return lastSegment && lastSegment.parts || [];
};
export const getKnownPartCount = ({preloadSegment}) => {
if (!preloadSegment) {
return;
}
const {parts, preloadHints} = preloadSegment;
let partCount = (preloadHints || [])
.reduce((count, hint) => count + (hint.type === 'PART' ? 1 : 0), 0);
partCount += (parts && parts.length) ? parts.length : 0;
return partCount;
};
/**
* Get the number of seconds to delay from the end of a
* live playlist.
*
* @param {Playlist} main the main playlist
* @param {Playlist} media the media playlist
* @return {number} the hold back in seconds.
*/
export const liveEdgeDelay = (main, media) => {
if (media.endList) {
return 0;
}
// dash suggestedPresentationDelay trumps everything
if (main && main.suggestedPresentationDelay) {
return main.suggestedPresentationDelay;
}
const hasParts = getLastParts(media).length > 0;
// look for "part" delays from ll-hls first
if (hasParts && media.serverControl && media.serverControl.partHoldBack) {
return media.serverControl.partHoldBack;
} else if (hasParts && media.partTargetDuration) {
return media.partTargetDuration * 3;
// finally look for full segment delays
} else if (media.serverControl && media.serverControl.holdBack) {
return media.serverControl.holdBack;
} else if (media.targetDuration) {
return media.targetDuration * 3;
}
return 0;
};
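// Sketch (hypothetical media playlist): an LL-HLS rendition that advertises
// parts and PART-HOLD-BACK uses the server-controlled value; without it the
// delay would fall back to three part target durations, then to the
// segment-level equivalents.
const exampleLiveEdgeDelaySketch = () => {
  const media = {
    endList: false,
    partTargetDuration: 0.5,
    serverControl: { partHoldBack: 1.5 },
    segments: [{ parts: [{ duration: 0.5 }] }]
  };

  return liveEdgeDelay(null, media); // -> 1.5
};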
/**
* walk backward until we find a duration we can use
* or return a failure
*
* @param {Playlist} playlist the playlist to walk through
* @param {Number} endSequence the mediaSequence to stop walking on
*/
const backwardDuration = function(playlist, endSequence) {
let result = 0;
let i = endSequence - playlist.mediaSequence;
// if a start time is available for the segment immediately following
// the interval, use it
let segment = playlist.segments[i];
// Walk backward until we find the latest segment with timeline
// information that is earlier than endSequence
if (segment) {
if (typeof segment.start !== 'undefined') {
return { result: segment.start, precise: true };
}
if (typeof segment.end !== 'undefined') {
return {
result: segment.end - segment.duration,
precise: true
};
}
}
while (i--) {
segment = playlist.segments[i];
if (typeof segment.end !== 'undefined') {
return { result: result + segment.end, precise: true };
}
result += segmentDurationWithParts(playlist, segment);
if (typeof segment.start !== 'undefined') {
return { result: result + segment.start, precise: true };
}
}
return { result, precise: false };
};
/**
* walk forward until we find a duration we can use
* or return a failure
*
* @param {Playlist} playlist the playlist to walk through
* @param {number} endSequence the mediaSequence to stop walking on
*/
const forwardDuration = function(playlist, endSequence) {
let result = 0;
let segment;
let i = endSequence - playlist.mediaSequence;
// Walk forward until we find the earliest segment with timeline
// information
for (; i < playlist.segments.length; i++) {
segment = playlist.segments[i];
if (typeof segment.start !== 'undefined') {
return {
result: segment.start - result,
precise: true
};
}
result += segmentDurationWithParts(playlist, segment);
if (typeof segment.end !== 'undefined') {
return {
result: segment.end - result,
precise: true
};
}
}
// indicate we didn't find a useful duration estimate
return { result: -1, precise: false };
};
/**
* Calculate the media duration from the segments associated with a
* playlist. The duration of a subinterval of the available segments
* may be calculated by specifying an end index.
*
* @param {Object} playlist a media playlist object
* @param {number=} endSequence an exclusive upper boundary
* for the playlist. Defaults to playlist length.
* @param {number} expired the amount of time that has dropped
* off the front of the playlist in a live scenario
* @return {number} the duration between the first available segment
* and end index.
*/
const intervalDuration = function(playlist, endSequence, expired) {
if (typeof endSequence === 'undefined') {
endSequence = playlist.mediaSequence + playlist.segments.length;
}
if (endSequence < playlist.mediaSequence) {
return 0;
}
// do a backward walk to estimate the duration
const backward = backwardDuration(playlist, endSequence);
if (backward.precise) {
// if we were able to base our duration estimate on timing
// information provided directly from the Media Source, return
// it
return backward.result;
}
// walk forward to see if a precise duration estimate can be made
// that way
const forward = forwardDuration(playlist, endSequence);
if (forward.precise) {
// we found a segment that has been buffered, so its
// position is known precisely
return forward.result;
}
// return the less-precise, playlist-based duration estimate
return backward.result + expired;
};
/**
* Calculates the duration of a playlist. If a start and end index
* are specified, the duration will be for the subset of the media
* timeline between those two indices. The total duration for live
* playlists is always Infinity.
*
* @param {Object} playlist a media playlist object
* @param {number=} endSequence an exclusive upper
* boundary for the playlist. Defaults to the playlist media
* sequence number plus its length.
* @param {number=} expired the amount of time that has
* dropped off the front of the playlist in a live scenario
* @return {number} the duration between the start index and end
* index.
*/
export const duration = function(playlist, endSequence, expired) {
if (!playlist) {
return 0;
}
if (typeof expired !== 'number') {
expired = 0;
}
// if a slice of the total duration is not requested, use
// playlist-level duration indicators when they're present
if (typeof endSequence === 'undefined') {
// if present, use the duration specified in the playlist
if (playlist.totalDuration) {
return playlist.totalDuration;
}
// duration should be Infinity for live playlists
if (!playlist.endList) {
return window.Infinity;
}
}
// calculate the total duration based on the segment durations
return intervalDuration(
playlist,
endSequence,
expired
);
};
/**
* Calculate the time between two indexes in the current playlist.
* Neither the start index nor the end index needs to be within the
* current playlist, in which case the playlist's targetDuration is
* used to approximate the durations of the missing segments
*
* @param {Array} options.durationList list to iterate over for durations.
* @param {number} options.defaultDuration duration to use for elements before or after the durationList
* @param {number} options.startIndex partsAndSegments index to start
* @param {number} options.endIndex partsAndSegments index to end.
* @return {number} the number of seconds between startIndex and endIndex
*/
export const sumDurations = function({defaultDuration, durationList, startIndex, endIndex}) {
let durations = 0;
if (startIndex > endIndex) {
[startIndex, endIndex] = [endIndex, startIndex];
}
if (startIndex < 0) {
for (let i = startIndex; i < Math.min(0, endIndex); i++) {
durations += defaultDuration;
}
startIndex = 0;
}
for (let i = startIndex; i < endIndex; i++) {
durations += durationList[i].duration;
}
return durations;
};
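// Sketch (hypothetical durations): indexes below zero are priced at
// defaultDuration, and a reversed index pair would be swapped first.
const exampleSumDurationsSketch = () => sumDurations({
  defaultDuration: 10,
  durationList: [{ duration: 4 }, { duration: 6 }],
  startIndex: -2,
  endIndex: 1
});
// -> 10 + 10 + 4 = 24: two defaulted slots before the list, then index 0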
/**
* Calculates the playlist end time
*
* @param {Object} playlist a media playlist object
* @param {number=} expired the amount of time that has
* dropped off the front of the playlist in a live scenario
* @param {boolean} useSafeLiveEnd a boolean value indicating whether or not the
* playlist end calculation should consider the safe live end
* (truncate the playlist end by three segments). This is normally
* used for calculating the end of the playlist's seekable range.
* This takes into account the value of liveEdgePadding.
* Setting liveEdgePadding to 0 is equivalent to setting this to false.
* @param {number} liveEdgePadding a number indicating how far from the end of the playlist we should be in seconds.
* If this is provided, it is used in the safe live end calculation.
* Setting useSafeLiveEnd=false or liveEdgePadding=0 are equivalent.
* Corresponds to suggestedPresentationDelay in DASH manifests.
* @return {number} the end time of playlist
* @function playlistEnd
*/
export const playlistEnd = function(playlist, expired, useSafeLiveEnd, liveEdgePadding) {
if (!playlist || !playlist.segments) {
return null;
}
if (playlist.endList) {
return duration(playlist);
}
if (expired === null) {
return null;
}
expired = expired || 0;
let lastSegmentEndTime = intervalDuration(
playlist,
playlist.mediaSequence + playlist.segments.length,
expired
);
if (useSafeLiveEnd) {
liveEdgePadding = typeof liveEdgePadding === 'number' ? liveEdgePadding : liveEdgeDelay(null, playlist);
lastSegmentEndTime -= liveEdgePadding;
}
// don't return a time less than zero
return Math.max(0, lastSegmentEndTime);
};
/**
* Calculates the interval of time that is currently seekable in a
* playlist. The returned time ranges are relative to the earliest
* moment in the specified playlist that is still available. A full
* seekable implementation for live streams would need to offset
* these values by the duration of content that has expired from the
* stream.
*
* @param {Object} playlist a media playlist object
* @param {number=} expired the amount of time that has
* dropped off the front of the playlist in a live scenario
* @param {number} liveEdgePadding how far from the end of the playlist we should be in seconds.
* Corresponds to suggestedPresentationDelay in DASH manifests.
* @return {TimeRanges} the periods of time that are valid targets
* for seeking
*/
export const seekable = function(playlist, expired, liveEdgePadding) {
const useSafeLiveEnd = true;
const seekableStart = expired || 0;
let seekableEnd = playlistEnd(playlist, expired, useSafeLiveEnd, liveEdgePadding);
if (seekableEnd === null) {
return createTimeRanges();
}
// Clamp seekable end since it can not be less than the seekable start
if (seekableEnd < seekableStart) {
seekableEnd = seekableStart;
}
return createTimeRanges(seekableStart, seekableEnd);
};
/**
* Determine the index and estimated starting time of the segment that
* contains a specified playback position in a media playlist.
*
* @param {Object} options.playlist the media playlist to query
* @param {number} options.currentTime The number of seconds since the earliest
* possible position to determine the containing segment for
* @param {number} options.startTime the time when the segment/part starts
* @param {number} options.startingSegmentIndex the segment index to start looking at.
* @param {number?} [options.startingPartIndex] the part index to look at within the segment.
* @param {boolean=} [options.exactManifestTimings] when true, don't apply the TIME_FUDGE_FACTOR when comparing times
*
* @return {Object} an object with partIndex, segmentIndex, and startTime.
*/
export const getMediaInfoForTime = function({
playlist,
currentTime,
startingSegmentIndex,
startingPartIndex,
startTime,
exactManifestTimings
}) {
let time = currentTime - startTime;
const partsAndSegments = getPartsAndSegments(playlist);
let startIndex = 0;
for (let i = 0; i < partsAndSegments.length; i++) {
const partAndSegment = partsAndSegments[i];
if (startingSegmentIndex !== partAndSegment.segmentIndex) {
continue;
}
// skip this if part index does not match.
if (typeof startingPartIndex === 'number' && typeof partAndSegment.partIndex === 'number' && startingPartIndex !== partAndSegment.partIndex) {
continue;
}
startIndex = i;
break;
}
if (time < 0) {
// Walk backward from startIndex in the playlist, adding durations
// until we find a segment that contains `time` and return it
if (startIndex > 0) {
for (let i = startIndex - 1; i >= 0; i--) {
const partAndSegment = partsAndSegments[i];
time += partAndSegment.duration;
if (exactManifestTimings) {
if (time < 0) {
continue;
}
} else if ((time + TIME_FUDGE_FACTOR) <= 0) {
continue;
}
return {
partIndex: partAndSegment.partIndex,
segmentIndex: partAndSegment.segmentIndex,
startTime: startTime - sumDurations({
defaultDuration: playlist.targetDuration,
durationList: partsAndSegments,
startIndex,
endIndex: i
})
};
}
}
// We were unable to find a good segment within the playlist
// so select the first segment
return {
partIndex: partsAndSegments[0] && partsAndSegments[0].partIndex || null,
segmentIndex: partsAndSegments[0] && partsAndSegments[0].segmentIndex || 0,
startTime: currentTime
};
}
// When startIndex is negative, we first walk forward to the first segment
// adding target durations. If we "run out of time" before getting to
// the first segment, return the first segment
if (startIndex < 0) {
for (let i = startIndex; i < 0; i++) {
time -= playlist.targetDuration;
if (time < 0) {
return {
partIndex: partsAndSegments[0] && partsAndSegments[0].partIndex || null,
segmentIndex: partsAndSegments[0] && partsAndSegments[0].segmentIndex || 0,
startTime: currentTime
};
}
}
startIndex = 0;
}
// Walk forward from startIndex in the playlist, subtracting durations
// until we find a segment that contains `time` and return it
for (let i = startIndex; i < partsAndSegments.length; i++) {
const partAndSegment = partsAndSegments[i];
time -= partAndSegment.duration;
const canUseFudgeFactor = partAndSegment.duration > TIME_FUDGE_FACTOR;
const isExactlyAtTheEnd = time === 0;
const isExtremelyCloseToTheEnd = canUseFudgeFactor && (time + TIME_FUDGE_FACTOR >= 0);
if (isExactlyAtTheEnd || isExtremelyCloseToTheEnd) {
// 1) We are exactly at the end of the current segment.
// 2) We are extremely close to the end of the current segment (The difference is less than 1 / 30).
// We may encounter this situation when
// we don't have an exact match between the segment duration info in the manifest and the actual duration of the segment
// For example:
// We appended 3 segments 10 seconds each, meaning we should have 30 sec buffered,
// but the actual buffered is 29.99999
//
// In both cases:
// if we passed current time -> it means that we already played current segment
// if we passed buffered.end -> it means that this segment is already loaded and buffered
// we should select the next segment if we have one:
if (i !== partsAndSegments.length - 1) {
continue;
}
}
if (exactManifestTimings) {
if (time > 0) {
continue;
}
} else if ((time - TIME_FUDGE_FACTOR) >= 0) {
continue;
}
return {
partIndex: partAndSegment.partIndex,
segmentIndex: partAndSegment.segmentIndex,
startTime: startTime + sumDurations({
defaultDuration: playlist.targetDuration,
durationList: partsAndSegments,
startIndex,
endIndex: i
})
};
}
// We are out of possible candidates so load the last one...
return {
segmentIndex: partsAndSegments[partsAndSegments.length - 1].segmentIndex,
partIndex: partsAndSegments[partsAndSegments.length - 1].partIndex,
startTime: currentTime
};
};
/**
* Check whether the playlist is excluded or not.
*
* @param {Object} playlist the media playlist object
* @return {boolean} whether the playlist is excluded or not
* @function isExcluded
*/
export const isExcluded = function(playlist) {
return playlist.excludeUntil && playlist.excludeUntil > Date.now();
};
/**
* Check whether the playlist is compatible with current playback configuration or has
* been excluded permanently for being incompatible.
*
* @param {Object} playlist the media playlist object
* @return {boolean} whether the playlist is incompatible or not
* @function isIncompatible
*/
export const isIncompatible = function(playlist) {
return playlist.excludeUntil && playlist.excludeUntil === Infinity;
};
/**
* Check whether the playlist is enabled or not.
*
* @param {Object} playlist the media playlist object
* @return {boolean} whether the playlist is enabled or not
* @function isEnabled
*/
export const isEnabled = function(playlist) {
const excluded = isExcluded(playlist);
return (!playlist.disabled && !excluded);
};
/**
* Check whether the playlist has been manually disabled through the representations api.
*
* @param {Object} playlist the media playlist object
* @return {boolean} whether the playlist is disabled manually or not
* @function isDisabled
*/
export const isDisabled = function(playlist) {
return playlist.disabled;
};
/**
* Returns whether the current playlist is an AES encrypted HLS stream
*
* @return {boolean} true if it's an AES encrypted HLS stream
*/
export const isAes = function(media) {
for (let i = 0; i < media.segments.length; i++) {
if (media.segments[i].key) {
return true;
}
}
return false;
};
/**
* Checks if the playlist has a value for the specified attribute
*
* @param {string} attr
* Attribute to check for
* @param {Object} playlist
* The media playlist object
* @return {boolean}
* Whether the playlist contains a value for the attribute or not
* @function hasAttribute
*/
export const hasAttribute = function(attr, playlist) {
return playlist.attributes && playlist.attributes[attr];
};
/**
* Estimates the time required to complete a segment download from the specified playlist
*
* @param {number} segmentDuration
* Duration of requested segment
* @param {number} bandwidth
* Current measured bandwidth of the player
* @param {Object} playlist
* The media playlist object
* @param {number=} bytesReceived
* Number of bytes already received for the request. Defaults to 0
* @return {number|NaN}
* The estimated time to request the segment. NaN if bandwidth information for
* the given playlist is unavailable
* @function estimateSegmentRequestTime
*/
export const estimateSegmentRequestTime = function(
segmentDuration,
bandwidth,
playlist,
bytesReceived = 0
) {
if (!hasAttribute('BANDWIDTH', playlist)) {
return NaN;
}
const size = segmentDuration * playlist.attributes.BANDWIDTH;
return (size - (bytesReceived * 8)) / bandwidth;
};
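// Worked sketch (hypothetical numbers): a 10s segment from a 4,000,000 bps
// rendition is roughly 40,000,000 bits; over an 8,000,000 bps connection
// with 1,000,000 bytes (8,000,000 bits) already received:
const exampleRequestTimeSketch = () => estimateSegmentRequestTime(
  10,
  8000000,
  { attributes: { BANDWIDTH: 4000000 } },
  1000000
);
// -> (40000000 - 8000000) / 8000000 = 4 seconds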
/*
* Returns whether the current playlist is the lowest rendition
*
* @return {Boolean} true if on lowest rendition
*/
export const isLowestEnabledRendition = (main, media) => {
if (main.playlists.length === 1) {
return true;
}
const currentBandwidth = media.attributes.BANDWIDTH || Number.MAX_VALUE;
return (main.playlists.filter((playlist) => {
if (!isEnabled(playlist)) {
return false;
}
return (playlist.attributes.BANDWIDTH || 0) < currentBandwidth;
}).length === 0);
};
export const playlistMatch = (a, b) => {
// both playlists are null
// or only one playlist is non-null
// no match
if (!a && !b || (!a && b) || (a && !b)) {
return false;
}
// playlist objects are the same, match
if (a === b) {
return true;
}
// first try to use id as it should be the most
// accurate
if (a.id && b.id && a.id === b.id) {
return true;
}
// next try to use resolvedUri as it should be the
// second most accurate.
if (a.resolvedUri && b.resolvedUri && a.resolvedUri === b.resolvedUri) {
return true;
}
// finally try to use uri as it should be accurate
// but might miss a few cases for relative uris
if (a.uri && b.uri && a.uri === b.uri) {
return true;
}
return false;
};
const someAudioVariant = function(main, callback) {
const AUDIO = main && main.mediaGroups && main.mediaGroups.AUDIO || {};
let found = false;
for (const groupName in AUDIO) {
for (const label in AUDIO[groupName]) {
found = callback(AUDIO[groupName][label]);
if (found) {
break;
}
}
if (found) {
break;
}
}
return !!found;
};
export const isAudioOnly = (main) => {
// we are audio only if we have no main playlists but do
// have media group playlists.
if (!main || !main.playlists || !main.playlists.length) {
// without audio variants or playlists this
// is not an audio only main.
const found = someAudioVariant(main, (variant) =>
(variant.playlists && variant.playlists.length) || variant.uri);
return found;
}
// if every playlist has only an audio codec it is audio only
for (let i = 0; i < main.playlists.length; i++) {
const playlist = main.playlists[i];
const CODECS = playlist.attributes && playlist.attributes.CODECS;
// all codecs are audio, this is an audio playlist.
if (CODECS && CODECS.split(',').every((c) => isAudioCodec(c))) {
continue;
}
// if the playlist is in an audio group, it is audio only
const found = someAudioVariant(main, (variant) => playlistMatch(playlist, variant));
if (found) {
continue;
}
// if we make it here this playlist isn't audio and we
// are not audio only
return false;
}
// if we make it past every playlist without returning, then
// this is an audio only playlist.
return true;
};
// exports
export default {
liveEdgeDelay,
duration,
seekable,
getMediaInfoForTime,
isEnabled,
isDisabled,
isExcluded,
isIncompatible,
playlistEnd,
isAes,
hasAttribute,
estimateSegmentRequestTime,
isLowestEnabledRendition,
isAudioOnly,
playlistMatch,
segmentDurationWithParts
};

489
VApp/node_modules/@videojs/http-streaming/src/ranges.js generated vendored Normal file
View File

@ -0,0 +1,489 @@
/**
* ranges
*
* Utilities for working with TimeRanges.
*
*/
import {createTimeRanges} from './util/vjs-compat';
// Fudge factor to account for TimeRanges rounding
export const TIME_FUDGE_FACTOR = 1 / 30;
// Comparisons between time values such as current time and the end of the buffered range
// can be misleading because of precision differences or when the current media has poorly
// aligned audio and video, which can cause values to be slightly off from what you would
// expect. This value is what we consider to be safe to use in such comparisons to account
// for these scenarios.
export const SAFE_TIME_DELTA = TIME_FUDGE_FACTOR * 3;
/**
* Clamps a value to within a range
*
* @param {number} num - the value to clamp
* @param {number[]} range - the inclusive [start, end] range to clamp within
* @return {number}
*/
const clamp = function(num, [start, end]) {
return Math.min(Math.max(start, num), end);
};
const filterRanges = function(timeRanges, predicate) {
const results = [];
let i;
if (timeRanges && timeRanges.length) {
// Search for ranges that match the predicate
for (i = 0; i < timeRanges.length; i++) {
if (predicate(timeRanges.start(i), timeRanges.end(i))) {
results.push([timeRanges.start(i), timeRanges.end(i)]);
}
}
}
return createTimeRanges(results);
};
/**
* Attempts to find the buffered TimeRange that contains the specified
* time.
*
* @param {TimeRanges} buffered - the TimeRanges object to query
* @param {number} time - the time to filter on.
* @return {TimeRanges} a new TimeRanges object
*/
export const findRange = function(buffered, time) {
return filterRanges(buffered, function(start, end) {
return start - SAFE_TIME_DELTA <= time &&
end + SAFE_TIME_DELTA >= time;
});
};
/**
* Returns the TimeRanges that begin later than the specified time.
*
* @param {TimeRanges} timeRanges - the TimeRanges object to query
* @param {number} time - the time to filter on.
* @return {TimeRanges} a new TimeRanges object.
*/
export const findNextRange = function(timeRanges, time) {
return filterRanges(timeRanges, function(start) {
return start - TIME_FUDGE_FACTOR >= time;
});
};
/**
* Returns gaps within a list of TimeRanges
*
* @param {TimeRanges} buffered - the TimeRanges object
* @return {TimeRanges} a TimeRanges object of gaps
*/
export const findGaps = function(buffered) {
if (buffered.length < 2) {
return createTimeRanges();
}
const ranges = [];
for (let i = 1; i < buffered.length; i++) {
const start = buffered.end(i - 1);
const end = buffered.start(i);
ranges.push([start, end]);
}
return createTimeRanges(ranges);
};
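// Sketch (hypothetical buffer): two buffered ranges [0, 10] and [15, 20]
// yield a single gap of [10, 15].
const exampleFindGapsSketch = () => {
  const buffered = createTimeRanges([[0, 10], [15, 20]]);

  return findGaps(buffered); // one range: start(0) === 10, end(0) === 15
};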
/**
* Search for a likely end time for the segment that was just appended
* based on the state of the `buffered` property before and after the
* append. If we find only one such uncommon end-point, return it.
*
* @param {TimeRanges} original - the buffered time ranges before the update
* @param {TimeRanges} update - the buffered time ranges after the update
* @return {number|null} the end time added between `original` and `update`,
* or null if one cannot be unambiguously determined.
*/
export const findSoleUncommonTimeRangesEnd = function(original, update) {
let i;
let start;
let end;
const result = [];
const edges = [];
// In order to qualify as a possible candidate, the end point must:
// 1) Not have already existed in the `original` ranges
// 2) Not result from the shrinking of a range that already existed
// in the `original` ranges
// 3) Not be contained inside of a range that existed in `original`
const overlapsCurrentEnd = function(span) {
return (span[0] <= end && span[1] >= end);
};
if (original) {
// Save all the edges in the `original` TimeRanges object
for (i = 0; i < original.length; i++) {
start = original.start(i);
end = original.end(i);
edges.push([start, end]);
}
}
if (update) {
// Save any end-points in `update` that are not in the `original`
// TimeRanges object
for (i = 0; i < update.length; i++) {
start = update.start(i);
end = update.end(i);
if (edges.some(overlapsCurrentEnd)) {
continue;
}
// at this point it must be a unique non-shrinking end edge
result.push(end);
}
}
// we err on the side of caution and return null if we didn't find
// exactly *one* differing end edge in the search above
if (result.length !== 1) {
return null;
}
return result[0];
};
/**
* Calculate the intersection of two TimeRanges
*
* @param {TimeRanges} bufferA
* @param {TimeRanges} bufferB
* @return {TimeRanges} The intersection of `bufferA` with `bufferB`
*/
export const bufferIntersection = function(bufferA, bufferB) {
let start = null;
let end = null;
let arity = 0;
const extents = [];
const ranges = [];
if (!bufferA || !bufferA.length || !bufferB || !bufferB.length) {
return createTimeRanges();
}
// Handle the case where we have both buffers and create an
// intersection of the two
let count = bufferA.length;
// A) Gather up all start and end times
while (count--) {
extents.push({time: bufferA.start(count), type: 'start'});
extents.push({time: bufferA.end(count), type: 'end'});
}
count = bufferB.length;
while (count--) {
extents.push({time: bufferB.start(count), type: 'start'});
extents.push({time: bufferB.end(count), type: 'end'});
}
// B) Sort them by time
extents.sort(function(a, b) {
return a.time - b.time;
});
// C) Go along one by one incrementing arity for start and decrementing
// arity for ends
for (count = 0; count < extents.length; count++) {
if (extents[count].type === 'start') {
arity++;
// D) If arity is ever incremented to 2 we are entering an
// overlapping range
if (arity === 2) {
start = extents[count].time;
}
} else if (extents[count].type === 'end') {
arity--;
// E) If arity is ever decremented to 1 we are leaving an
// overlapping range
if (arity === 1) {
end = extents[count].time;
}
}
// F) Record overlapping ranges
if (start !== null && end !== null) {
ranges.push([start, end]);
start = null;
end = null;
}
}
return createTimeRanges(ranges);
};
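// Sketch (hypothetical buffers): [0, 10] intersected with [5, 15] overlaps
// exactly where arity reaches 2 in the sweep above.
const exampleIntersectionSketch = () => {
  const a = createTimeRanges([[0, 10]]);
  const b = createTimeRanges([[5, 15]]);

  return bufferIntersection(a, b); // one range: [5, 10]
};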
/**
* Calculates the percentage of `segmentRange` that overlaps the
* `buffered` time ranges.
*
* @param {TimeRanges} adjustedRange - the time range that the segment
* covers adjusted according to currentTime
* @param {TimeRanges} referenceRange - the original time range that the
* segment covers
* @param {number} currentTime - time in seconds where the current playback
* is at
* @param {TimeRanges} buffered - the currently buffered time ranges
* @return {number} percent of the segment currently buffered
*/
const calculateBufferedPercent = function(
adjustedRange,
referenceRange,
currentTime,
buffered
) {
const referenceDuration = referenceRange.end(0) - referenceRange.start(0);
const adjustedDuration = adjustedRange.end(0) - adjustedRange.start(0);
const bufferMissingFromAdjusted = referenceDuration - adjustedDuration;
const adjustedIntersection = bufferIntersection(adjustedRange, buffered);
const referenceIntersection = bufferIntersection(referenceRange, buffered);
let adjustedOverlap = 0;
let referenceOverlap = 0;
let count = adjustedIntersection.length;
while (count--) {
adjustedOverlap += adjustedIntersection.end(count) -
adjustedIntersection.start(count);
// If the current overlap segment starts at currentTime, then increase the
// overlap duration so that it actually starts at the beginning of referenceRange
// by including the difference between the two Range's durations
// This is a workaround for the way Flash has no buffer before currentTime
// TODO: see if this is still necessary since Flash isn't included
if (adjustedIntersection.start(count) === currentTime) {
adjustedOverlap += bufferMissingFromAdjusted;
}
}
count = referenceIntersection.length;
while (count--) {
referenceOverlap += referenceIntersection.end(count) -
referenceIntersection.start(count);
}
// Use whichever value is larger for the percentage-buffered since that
// value is likely more accurate
return Math.max(adjustedOverlap, referenceOverlap) / referenceDuration * 100;
};
/**
* Return the amount of a range specified by the startOfSegment and segmentDuration
* overlaps the current buffered content.
*
* @param {number} startOfSegment - the time where the segment begins
* @param {number} segmentDuration - the duration of the segment in seconds
* @param {number} currentTime - time in seconds where the current playback
* is at
* @param {TimeRanges} buffered - the state of the buffer
* @return {number} percentage of the segment's time range that is
* already in `buffered`
*/
export const getSegmentBufferedPercent = function(
startOfSegment,
segmentDuration,
currentTime,
buffered
) {
const endOfSegment = startOfSegment + segmentDuration;
// The entire time range of the segment
const originalSegmentRange = createTimeRanges([[
startOfSegment,
endOfSegment
]]);
// The adjusted segment time range that is set up such that it starts
// no earlier than currentTime
// Flash has no notion of a back-buffer so adjustedSegmentRange adjusts
// for that and the function will still return 100% if only half of a
// segment is actually in the buffer as long as the currentTime is also
// half-way through the segment
const adjustedSegmentRange = createTimeRanges([[
clamp(startOfSegment, [currentTime, endOfSegment]),
endOfSegment
]]);
// This condition happens when the currentTime is beyond the segment's
// end time
if (adjustedSegmentRange.start(0) === adjustedSegmentRange.end(0)) {
return 0;
}
const percent = calculateBufferedPercent(
adjustedSegmentRange,
originalSegmentRange,
currentTime,
buffered
);
// If the segment is reported as having a zero duration, return 0%
// since it is likely that we will need to fetch the segment
if (isNaN(percent) || percent === Infinity || percent === -Infinity) {
return 0;
}
return percent;
};
/**
* Gets a human readable string for a TimeRange
*
* @param {TimeRange} range
* @return {string} a human readable string
*/
export const printableRange = (range) => {
const strArr = [];
if (!range || !range.length) {
return '';
}
for (let i = 0; i < range.length; i++) {
strArr.push(range.start(i) + ' => ' + range.end(i));
}
return strArr.join(', ');
};
/**
* Calculates the amount of time left in seconds until the player hits the end of the
* buffer and causes a rebuffer
*
* @param {TimeRange} buffered
* The state of the buffer
* @param {number} currentTime
* The current time of the player
* @param {number} playbackRate
* The current playback rate of the player. Defaults to 1.
* @return {number}
* Time until the player has to start rebuffering in seconds.
* @function timeUntilRebuffer
*/
export const timeUntilRebuffer = function(buffered, currentTime, playbackRate = 1) {
const bufferedEnd = buffered.length ? buffered.end(buffered.length - 1) : 0;
return (bufferedEnd - currentTime) / playbackRate;
};
/**
* Converts a TimeRanges object into an array representation
*
* @param {TimeRanges} timeRanges
* @return {Array}
*/
export const timeRangesToArray = (timeRanges) => {
const timeRangesList = [];
for (let i = 0; i < timeRanges.length; i++) {
timeRangesList.push({
start: timeRanges.start(i),
end: timeRanges.end(i)
});
}
return timeRangesList;
};
/**
* Determines if two time range objects are different.
*
* @param {TimeRange} a
* the first time range object to check
*
* @param {TimeRange} b
* the second time range object to check
*
* @return {Boolean}
* Whether the time range objects differ
*/
export const isRangeDifferent = function(a, b) {
// same object
if (a === b) {
return false;
}
// one or the other is undefined
if (!a && b || (!b && a)) {
return true;
}
// length is different
if (a.length !== b.length) {
return true;
}
// see if any start/end pair is different
for (let i = 0; i < a.length; i++) {
if (a.start(i) !== b.start(i) || a.end(i) !== b.end(i)) {
return true;
}
}
// if the length and every pair is the same
// this is the same time range
return false;
};
export const lastBufferedEnd = function(a) {
if (!a || !a.length || !a.end) {
return;
}
return a.end(a.length - 1);
};
/**
* A utility function to add up the amount of time in a timeRange
* after a specified startTime.
* e.g. [[0, 10], [20, 40], [50, 60]] with a startTime of 0
* would return 40, as there are 40 seconds after 0 in the timeRange
*
* @param {TimeRange} range
* The range to check against
* @param {number} startTime
* The time in the time range that you should start counting from
*
* @return {number}
* The number of seconds in the buffer past the specified time.
*/
export const timeAheadOf = function(range, startTime) {
let time = 0;
if (!range || !range.length) {
return time;
}
for (let i = 0; i < range.length; i++) {
const start = range.start(i);
const end = range.end(i);
// startTime is after this range entirely
if (startTime > end) {
continue;
}
// startTime is within this range
if (startTime > start && startTime <= end) {
time += end - startTime;
continue;
}
// startTime is before this range.
time += end - start;
}
return time;
};
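// Sketch matching the example in the comment above:
const exampleTimeAheadSketch = () =>
  timeAheadOf(createTimeRanges([[0, 10], [20, 40], [50, 60]]), 0);
// -> 10 + 20 + 10 = 40 seconds ahead of time 0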

View File

@ -0,0 +1,125 @@
import videojs from 'video.js';
import {merge} from './util/vjs-compat';
const defaultOptions = {
errorInterval: 30,
getSource(next) {
const tech = this.tech({ IWillNotUseThisInPlugins: true });
const sourceObj = tech.currentSource_ || this.currentSource();
return next(sourceObj);
}
};
/**
* Main entry point for the plugin
*
* @param {Player} player a reference to a videojs Player instance
* @param {Object} [options] an object with plugin options
* @private
*/
const initPlugin = function(player, options) {
let lastCalled = 0;
let seekTo = 0;
const localOptions = merge(defaultOptions, options);
player.ready(() => {
player.trigger({type: 'usage', name: 'vhs-error-reload-initialized'});
});
/**
* Player modifications to perform that must wait until `loadedmetadata`
* has been triggered
*
* @private
*/
const loadedMetadataHandler = function() {
if (seekTo) {
player.currentTime(seekTo);
}
};
/**
* Set the source on the player element, play, and seek if necessary
*
* @param {Object} sourceObj An object specifying the source url and mime-type to play
* @private
*/
const setSource = function(sourceObj) {
if (sourceObj === null || sourceObj === undefined) {
return;
}
seekTo = (player.duration() !== Infinity && player.currentTime()) || 0;
player.one('loadedmetadata', loadedMetadataHandler);
player.src(sourceObj);
player.trigger({type: 'usage', name: 'vhs-error-reload'});
player.play();
};
/**
* Attempt to get a source from either the built-in getSource function
* or a custom function provided via the options
*
* @private
*/
const errorHandler = function() {
// Do not attempt to reload the source if less than 'errorInterval'
// seconds have elapsed since the last source-reload
if (Date.now() - lastCalled < localOptions.errorInterval * 1000) {
player.trigger({type: 'usage', name: 'vhs-error-reload-canceled'});
return;
}
if (!localOptions.getSource ||
typeof localOptions.getSource !== 'function') {
videojs.log.error('ERROR: reloadSourceOnError - The option getSource must be a function!');
return;
}
lastCalled = Date.now();
return localOptions.getSource.call(player, setSource);
};
/**
* Unbind any event handlers that were bound by the plugin
*
* @private
*/
const cleanupEvents = function() {
player.off('loadedmetadata', loadedMetadataHandler);
player.off('error', errorHandler);
player.off('dispose', cleanupEvents);
};
/**
* Cleanup before re-initializing the plugin
*
* @param {Object} [newOptions] an object with plugin options
* @private
*/
const reinitPlugin = function(newOptions) {
cleanupEvents();
initPlugin(player, newOptions);
};
player.on('error', errorHandler);
player.on('dispose', cleanupEvents);
// Overwrite the plugin function so that we can correctly cleanup before
// initializing the plugin
player.reloadSourceOnError = reinitPlugin;
};
/**
* Reload the source when an error is detected as long as there
* wasn't an error previously within the last `errorInterval` seconds (30 by default)
*
* @param {Object} [options] an object with plugin options
*/
const reloadSourceOnError = function(options) {
initPlugin(this, options);
};
export default reloadSourceOnError;
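// Usage sketch (assumes the plugin has been registered on a video.js
// player, as the VHS entry point does; the source URL is hypothetical):
const exampleReloadWiringSketch = (player) => {
  player.reloadSourceOnError({
    // wait at least 10 seconds between reload attempts
    errorInterval: 10,
    getSource(next) {
      // hand a fresh (e.g. re-signed) source object to `next`
      next({ src: 'https://example.com/live.m3u8', type: 'application/x-mpegURL' });
    }
  });
};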

View File

@ -0,0 +1,121 @@
import { isIncompatible, isEnabled, isAudioOnly } from './playlist.js';
import { codecsForPlaylist } from './util/codecs.js';
/**
* Returns a function that acts as the Enable/disable playlist function.
*
* @param {PlaylistLoader} loader - The main playlist loader
* @param {string} playlistID - id of the playlist
* @param {Function} changePlaylistFn - A function to be called after a
* playlist's enabled-state has been changed. Will NOT be called if a
* playlist's enabled-state is unchanged
* @param {boolean=} enable - Value to set the playlist enabled-state to
* or if undefined returns the current enabled-state for the playlist
* @return {Function} Function for setting/getting enabled
*/
const enableFunction = (loader, playlistID, changePlaylistFn) => (enable) => {
const playlist = loader.main.playlists[playlistID];
const incompatible = isIncompatible(playlist);
const currentlyEnabled = isEnabled(playlist);
if (typeof enable === 'undefined') {
return currentlyEnabled;
}
if (enable) {
delete playlist.disabled;
} else {
playlist.disabled = true;
}
const metadata = {
renditionInfo: {
id: playlistID,
bandwidth: playlist.attributes.BANDWIDTH,
resolution: playlist.attributes.RESOLUTION,
codecs: playlist.attributes.CODECS
},
cause: 'fast-quality'
};
if (enable !== currentlyEnabled && !incompatible) {
// Ensure the outside world knows about our changes
if (enable) {
// call fast quality change only when the playlist is enabled
changePlaylistFn(playlist);
loader.trigger({ type: 'renditionenabled', metadata});
} else {
loader.trigger({ type: 'renditiondisabled', metadata});
}
}
return enable;
};
/**
* The representation object encapsulates the publicly visible information
* in a media playlist along with a setter/getter-type function (enabled)
* for changing the enabled-state of a particular playlist entry
*
* @class Representation
*/
class Representation {
constructor(vhsHandler, playlist, id) {
const {
playlistController_: pc
} = vhsHandler;
const qualityChangeFunction = pc.fastQualityChange_.bind(pc);
// some playlist attributes are optional
if (playlist.attributes) {
const resolution = playlist.attributes.RESOLUTION;
this.width = resolution && resolution.width;
this.height = resolution && resolution.height;
this.bandwidth = playlist.attributes.BANDWIDTH;
this.frameRate = playlist.attributes['FRAME-RATE'];
}
this.codecs = codecsForPlaylist(pc.main(), playlist);
this.playlist = playlist;
// The id is simply the ordinality of the media playlist
// within the main playlist
this.id = id;
// Partially-apply the enableFunction to create a playlist-
// specific variant
this.enabled = enableFunction(
vhsHandler.playlists,
playlist.id,
qualityChangeFunction
);
}
}
/**
* A mixin function that adds the `representations` api to an instance
* of the VhsHandler class
*
* @param {VhsHandler} vhsHandler - An instance of VhsHandler to add the
* representation API into
*/
const renditionSelectionMixin = function(vhsHandler) {
// Add a single API-specific function to the VhsHandler instance
vhsHandler.representations = () => {
const main = vhsHandler.playlistController_.main();
const playlists = isAudioOnly(main) ?
vhsHandler.playlistController_.getAudioTrackPlaylists_() :
main.playlists;
if (!playlists) {
return [];
}
return playlists
.filter((media) => !isIncompatible(media))
      .map((media) => new Representation(vhsHandler, media, media.id));
};
};
export default renditionSelectionMixin;
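// Usage sketch, assuming this mixin has been applied to the handler exposed
// as `player.tech().vhs` (as in this project's public API); the 720p cutoff
// is an arbitrary example:
//
//   const vhs = player.tech({ IWillNotUseThisInPlugins: true }).vhs;
//   vhs.representations().forEach((representation) => {
//     // `enabled` is the partially-applied enableFunction from above:
//     // call with no argument to read, with a boolean to set
//     representation.enabled(representation.height <= 720);
//   });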

View File

@ -0,0 +1,35 @@
/**
* @file resolve-url.js - Handling how URLs are resolved and manipulated
*/
import _resolveUrl from '@videojs/vhs-utils/es/resolve-url.js';
export const resolveUrl = _resolveUrl;
/**
* If the xhr request was redirected, return the responseURL, otherwise,
* return the original url.
*
* @api private
*
 * @param {string} url - a URL being requested
* @param {XMLHttpRequest} req - xhr request result
*
* @return {string}
*/
export const resolveManifestRedirect = (url, req) => {
// To understand how the responseURL below is set and generated:
// - https://fetch.spec.whatwg.org/#concept-response-url
// - https://fetch.spec.whatwg.org/#atomic-http-redirect-handling
if (
req &&
req.responseURL &&
url !== req.responseURL
) {
return req.responseURL;
}
return url;
};
export default resolveUrl;
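// Usage sketch: prefer the final (post-redirect) URL of a completed manifest
// request when resolving relative segment URIs. `request` is assumed to be
// the finished XMLHttpRequest for the manifest.
//
//   const manifestUrl = resolveManifestRedirect('https://cdn.example.com/main.m3u8', request);
//   const segmentUrl = resolveUrl(manifestUrl, 'segment-00001.ts');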

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,292 @@
import TransmuxWorker from 'worker!./transmuxer-worker.js';
import videojs from 'video.js';
import { segmentInfoPayload } from './segment-loader';
export const handleData_ = (event, transmuxedData, callback) => {
const {
type,
initSegment,
captions,
captionStreams,
metadata,
videoFrameDtsTime,
videoFramePtsTime
} = event.data.segment;
transmuxedData.buffer.push({
captions,
captionStreams,
metadata
});
const boxes = event.data.segment.boxes || {
data: event.data.segment.data
};
const result = {
type,
// cast ArrayBuffer to TypedArray
data: new Uint8Array(
boxes.data,
boxes.data.byteOffset,
boxes.data.byteLength
),
initSegment: new Uint8Array(
initSegment.data,
initSegment.byteOffset,
initSegment.byteLength
)
};
if (typeof videoFrameDtsTime !== 'undefined') {
result.videoFrameDtsTime = videoFrameDtsTime;
}
if (typeof videoFramePtsTime !== 'undefined') {
result.videoFramePtsTime = videoFramePtsTime;
}
callback(result);
};
export const handleDone_ = ({
transmuxedData,
callback
}) => {
// Previously we only returned data on data events,
// not on done events. Clear out the buffer to keep that consistent.
transmuxedData.buffer = [];
// all buffers should have been flushed from the muxer, so start processing anything we
// have received
callback(transmuxedData);
};
export const handleGopInfo_ = (event, transmuxedData) => {
transmuxedData.gopInfo = event.data.gopInfo;
};
export const processTransmux = (options) => {
const {
transmuxer,
bytes,
audioAppendStart,
gopsToAlignWith,
remux,
onData,
onTrackInfo,
onAudioTimingInfo,
onVideoTimingInfo,
onVideoSegmentTimingInfo,
onAudioSegmentTimingInfo,
onId3,
onCaptions,
onDone,
onEndedTimeline,
onTransmuxerLog,
isEndOfTimeline,
segment,
triggerSegmentEventFn
} = options;
const transmuxedData = {
buffer: []
};
let waitForEndedTimelineEvent = isEndOfTimeline;
const handleMessage = (event) => {
if (transmuxer.currentTransmux !== options) {
// disposed
return;
}
if (event.data.action === 'data') {
handleData_(event, transmuxedData, onData);
}
if (event.data.action === 'trackinfo') {
onTrackInfo(event.data.trackInfo);
}
if (event.data.action === 'gopInfo') {
handleGopInfo_(event, transmuxedData);
}
if (event.data.action === 'audioTimingInfo') {
onAudioTimingInfo(event.data.audioTimingInfo);
}
if (event.data.action === 'videoTimingInfo') {
onVideoTimingInfo(event.data.videoTimingInfo);
}
if (event.data.action === 'videoSegmentTimingInfo') {
onVideoSegmentTimingInfo(event.data.videoSegmentTimingInfo);
}
if (event.data.action === 'audioSegmentTimingInfo') {
onAudioSegmentTimingInfo(event.data.audioSegmentTimingInfo);
}
if (event.data.action === 'id3Frame') {
onId3([event.data.id3Frame], event.data.id3Frame.dispatchType);
}
if (event.data.action === 'caption') {
onCaptions(event.data.caption);
}
if (event.data.action === 'endedtimeline') {
waitForEndedTimelineEvent = false;
onEndedTimeline();
}
if (event.data.action === 'log') {
onTransmuxerLog(event.data.log);
}
// wait for the transmuxed event since we may have audio and video
if (event.data.type !== 'transmuxed') {
return;
}
// If the "endedtimeline" event has not yet fired, and this segment represents the end
// of a timeline, that means there may still be data events before the segment
    // processing can be considered complete. In that case, the final event should be
// an "endedtimeline" event with the type "transmuxed."
if (waitForEndedTimelineEvent) {
return;
}
transmuxer.onmessage = null;
handleDone_({
transmuxedData,
callback: onDone
});
/* eslint-disable no-use-before-define */
dequeue(transmuxer);
/* eslint-enable */
};
const handleError = () => {
const error = {
message: 'Received an error message from the transmuxer worker',
metadata: {
errorType: videojs.Error.StreamingFailedToTransmuxSegment,
segmentInfo: segmentInfoPayload({segment})
}
};
onDone(null, error);
};
transmuxer.onmessage = handleMessage;
transmuxer.onerror = handleError;
if (audioAppendStart) {
transmuxer.postMessage({
action: 'setAudioAppendStart',
appendStart: audioAppendStart
});
}
// allow empty arrays to be passed to clear out GOPs
if (Array.isArray(gopsToAlignWith)) {
transmuxer.postMessage({
action: 'alignGopsWith',
gopsToAlignWith
});
}
if (typeof remux !== 'undefined') {
transmuxer.postMessage({
action: 'setRemux',
remux
});
}
if (bytes.byteLength) {
const buffer = bytes instanceof ArrayBuffer ? bytes : bytes.buffer;
const byteOffset = bytes instanceof ArrayBuffer ? 0 : bytes.byteOffset;
triggerSegmentEventFn({ type: 'segmenttransmuxingstart', segment });
transmuxer.postMessage(
{
action: 'push',
// Send the typed-array of data as an ArrayBuffer so that
// it can be sent as a "Transferable" and avoid the costly
// memory copy
data: buffer,
// To recreate the original typed-array, we need information
// about what portion of the ArrayBuffer it was a view into
byteOffset,
byteLength: bytes.byteLength
},
[ buffer ]
);
}
if (isEndOfTimeline) {
transmuxer.postMessage({ action: 'endTimeline' });
}
// even if we didn't push any bytes, we have to make sure we flush in case we reached
// the end of the segment
transmuxer.postMessage({ action: 'flush' });
};
export const dequeue = (transmuxer) => {
transmuxer.currentTransmux = null;
if (transmuxer.transmuxQueue.length) {
transmuxer.currentTransmux = transmuxer.transmuxQueue.shift();
if (typeof transmuxer.currentTransmux === 'function') {
transmuxer.currentTransmux();
} else {
processTransmux(transmuxer.currentTransmux);
}
}
};
export const processAction = (transmuxer, action) => {
transmuxer.postMessage({ action });
dequeue(transmuxer);
};
export const enqueueAction = (action, transmuxer) => {
if (!transmuxer.currentTransmux) {
transmuxer.currentTransmux = action;
processAction(transmuxer, action);
return;
}
transmuxer.transmuxQueue.push(processAction.bind(null, transmuxer, action));
};
export const reset = (transmuxer) => {
enqueueAction('reset', transmuxer);
};
export const endTimeline = (transmuxer) => {
enqueueAction('endTimeline', transmuxer);
};
export const transmux = (options) => {
if (!options.transmuxer.currentTransmux) {
options.transmuxer.currentTransmux = options;
processTransmux(options);
return;
}
options.transmuxer.transmuxQueue.push(options);
};
export const createTransmuxer = (options) => {
const transmuxer = new TransmuxWorker();
transmuxer.currentTransmux = null;
transmuxer.transmuxQueue = [];
const term = transmuxer.terminate;
transmuxer.terminate = () => {
transmuxer.currentTransmux = null;
transmuxer.transmuxQueue.length = 0;
return term.call(transmuxer);
};
transmuxer.postMessage({action: 'init', options});
return transmuxer;
};
export default {
reset,
endTimeline,
transmux,
createTransmuxer
};
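// Usage sketch, assuming TS segment bytes in a Uint8Array; callbacks not
// needed by the caller are stubbed with a no-op since `processTransmux`
// invokes them unconditionally when the matching worker message arrives:
//
//   const noop = () => {};
//   const transmuxer = createTransmuxer({ remux: true });
//   transmux({
//     transmuxer,
//     bytes: segmentBytes,
//     audioAppendStart: null,
//     gopsToAlignWith: null,
//     remux: true,
//     isEndOfTimeline: false,
//     segment: {},
//     triggerSegmentEventFn: noop,
//     onData: (result) => { /* append result.data to a source buffer */ },
//     onDone: (data, error) => { /* segment finished or failed */ },
//     onTrackInfo: noop,
//     onAudioTimingInfo: noop,
//     onVideoTimingInfo: noop,
//     onVideoSegmentTimingInfo: noop,
//     onAudioSegmentTimingInfo: noop,
//     onId3: noop,
//     onCaptions: noop,
//     onEndedTimeline: noop,
//     onTransmuxerLog: noop
//   });
//
// If the worker is busy, the options are queued on `transmuxQueue` and
// drained by `dequeue` after the current job's "transmuxed" message.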

View File

@ -0,0 +1,894 @@
/**
* @file source-updater.js
*/
import videojs from 'video.js';
import logger from './util/logger';
import noop from './util/noop';
import { bufferIntersection } from './ranges.js';
import {getMimeForCodec} from '@videojs/vhs-utils/es/codecs.js';
import window from 'global/window';
import toTitleCase from './util/to-title-case.js';
import { QUOTA_EXCEEDED_ERR } from './error-codes';
import {createTimeRanges, bufferedRangesToString} from './util/vjs-compat';
const bufferTypes = [
'video',
'audio'
];
const updating = (type, sourceUpdater) => {
const sourceBuffer = sourceUpdater[`${type}Buffer`];
return (sourceBuffer && sourceBuffer.updating) || sourceUpdater.queuePending[type];
};
const nextQueueIndexOfType = (type, queue) => {
for (let i = 0; i < queue.length; i++) {
const queueEntry = queue[i];
if (queueEntry.type === 'mediaSource') {
// If the next entry is a media source entry (uses multiple source buffers), block
// processing to allow it to go through first.
return null;
}
if (queueEntry.type === type) {
return i;
}
}
return null;
};
const shiftQueue = (type, sourceUpdater) => {
if (sourceUpdater.queue.length === 0) {
return;
}
let queueIndex = 0;
let queueEntry = sourceUpdater.queue[queueIndex];
if (queueEntry.type === 'mediaSource') {
if (!sourceUpdater.updating() && sourceUpdater.mediaSource.readyState !== 'closed') {
sourceUpdater.queue.shift();
queueEntry.action(sourceUpdater);
if (queueEntry.doneFn) {
queueEntry.doneFn();
}
// Only specific source buffer actions must wait for async updateend events. Media
// Source actions process synchronously. Therefore, both audio and video source
// buffers are now clear to process the next queue entries.
shiftQueue('audio', sourceUpdater);
shiftQueue('video', sourceUpdater);
}
// Media Source actions require both source buffers, so if the media source action
// couldn't process yet (because one or both source buffers are busy), block other
// queue actions until both are available and the media source action can process.
return;
}
if (type === 'mediaSource') {
// If the queue was shifted by a media source action (this happens when pushing a
// media source action onto the queue), then it wasn't from an updateend event from an
// audio or video source buffer, so there's no change from previous state, and no
// processing should be done.
return;
}
// Media source queue entries don't need to consider whether the source updater is
// started (i.e., source buffers are created) as they don't need the source buffers, but
// source buffer queue entries do.
if (
!sourceUpdater.ready() ||
sourceUpdater.mediaSource.readyState === 'closed' ||
updating(type, sourceUpdater)
) {
return;
}
if (queueEntry.type !== type) {
queueIndex = nextQueueIndexOfType(type, sourceUpdater.queue);
if (queueIndex === null) {
// Either there's no queue entry that uses this source buffer type in the queue, or
// there's a media source queue entry before the next entry of this type, in which
// case wait for that action to process first.
return;
}
queueEntry = sourceUpdater.queue[queueIndex];
}
sourceUpdater.queue.splice(queueIndex, 1);
// Keep a record that this source buffer type is in use.
//
// The queue pending operation must be set before the action is performed in the event
// that the action results in a synchronous event that is acted upon. For instance, if
// an exception is thrown that can be handled, it's possible that new actions will be
// appended to an empty queue and immediately executed, but would not have the correct
// pending information if this property was set after the action was performed.
sourceUpdater.queuePending[type] = queueEntry;
queueEntry.action(type, sourceUpdater);
if (!queueEntry.doneFn) {
// synchronous operation, process next entry
sourceUpdater.queuePending[type] = null;
shiftQueue(type, sourceUpdater);
return;
}
};
const cleanupBuffer = (type, sourceUpdater) => {
const buffer = sourceUpdater[`${type}Buffer`];
const titleType = toTitleCase(type);
if (!buffer) {
return;
}
buffer.removeEventListener('updateend', sourceUpdater[`on${titleType}UpdateEnd_`]);
buffer.removeEventListener('error', sourceUpdater[`on${titleType}Error_`]);
sourceUpdater.codecs[type] = null;
sourceUpdater[`${type}Buffer`] = null;
};
const inSourceBuffers = (mediaSource, sourceBuffer) => mediaSource && sourceBuffer &&
Array.prototype.indexOf.call(mediaSource.sourceBuffers, sourceBuffer) !== -1;
const actions = {
appendBuffer: (bytes, segmentInfo, onError) => (type, sourceUpdater) => {
const sourceBuffer = sourceUpdater[`${type}Buffer`];
// can't do anything if the media source / source buffer is null
// or the media source does not contain this source buffer.
if (!inSourceBuffers(sourceUpdater.mediaSource, sourceBuffer)) {
return;
}
sourceUpdater.logger_(`Appending segment ${segmentInfo.mediaIndex}'s ${bytes.length} bytes to ${type}Buffer`);
try {
sourceBuffer.appendBuffer(bytes);
} catch (e) {
sourceUpdater.logger_(`Error with code ${e.code} ` +
(e.code === QUOTA_EXCEEDED_ERR ? '(QUOTA_EXCEEDED_ERR) ' : '') +
`when appending segment ${segmentInfo.mediaIndex} to ${type}Buffer`);
sourceUpdater.queuePending[type] = null;
onError(e);
}
},
remove: (start, end) => (type, sourceUpdater) => {
const sourceBuffer = sourceUpdater[`${type}Buffer`];
// can't do anything if the media source / source buffer is null
// or the media source does not contain this source buffer.
if (!inSourceBuffers(sourceUpdater.mediaSource, sourceBuffer)) {
return;
}
sourceUpdater.logger_(`Removing ${start} to ${end} from ${type}Buffer`);
try {
sourceBuffer.remove(start, end);
} catch (e) {
sourceUpdater.logger_(`Remove ${start} to ${end} from ${type}Buffer failed`);
}
},
timestampOffset: (offset) => (type, sourceUpdater) => {
const sourceBuffer = sourceUpdater[`${type}Buffer`];
// can't do anything if the media source / source buffer is null
// or the media source does not contain this source buffer.
if (!inSourceBuffers(sourceUpdater.mediaSource, sourceBuffer)) {
return;
}
    sourceUpdater.logger_(`Setting ${type} timestampOffset to ${offset}`);
sourceBuffer.timestampOffset = offset;
},
callback: (callback) => (type, sourceUpdater) => {
callback();
},
endOfStream: (error) => (sourceUpdater) => {
if (sourceUpdater.mediaSource.readyState !== 'open') {
return;
}
sourceUpdater.logger_(`Calling mediaSource endOfStream(${error || ''})`);
try {
sourceUpdater.mediaSource.endOfStream(error);
} catch (e) {
videojs.log.warn('Failed to call media source endOfStream', e);
}
},
duration: (duration) => (sourceUpdater) => {
sourceUpdater.logger_(`Setting mediaSource duration to ${duration}`);
try {
sourceUpdater.mediaSource.duration = duration;
} catch (e) {
videojs.log.warn('Failed to set media source duration', e);
}
},
abort: () => (type, sourceUpdater) => {
if (sourceUpdater.mediaSource.readyState !== 'open') {
return;
}
const sourceBuffer = sourceUpdater[`${type}Buffer`];
// can't do anything if the media source / source buffer is null
// or the media source does not contain this source buffer.
if (!inSourceBuffers(sourceUpdater.mediaSource, sourceBuffer)) {
return;
}
sourceUpdater.logger_(`calling abort on ${type}Buffer`);
try {
sourceBuffer.abort();
} catch (e) {
videojs.log.warn(`Failed to abort on ${type}Buffer`, e);
}
},
addSourceBuffer: (type, codec) => (sourceUpdater) => {
const titleType = toTitleCase(type);
const mime = getMimeForCodec(codec);
sourceUpdater.logger_(`Adding ${type}Buffer with codec ${codec} to mediaSource`);
const sourceBuffer = sourceUpdater.mediaSource.addSourceBuffer(mime);
sourceBuffer.addEventListener('updateend', sourceUpdater[`on${titleType}UpdateEnd_`]);
sourceBuffer.addEventListener('error', sourceUpdater[`on${titleType}Error_`]);
sourceUpdater.codecs[type] = codec;
sourceUpdater[`${type}Buffer`] = sourceBuffer;
},
removeSourceBuffer: (type) => (sourceUpdater) => {
const sourceBuffer = sourceUpdater[`${type}Buffer`];
cleanupBuffer(type, sourceUpdater);
// can't do anything if the media source / source buffer is null
// or the media source does not contain this source buffer.
if (!inSourceBuffers(sourceUpdater.mediaSource, sourceBuffer)) {
return;
}
sourceUpdater.logger_(`Removing ${type}Buffer with codec ${sourceUpdater.codecs[type]} from mediaSource`);
try {
sourceUpdater.mediaSource.removeSourceBuffer(sourceBuffer);
} catch (e) {
videojs.log.warn(`Failed to removeSourceBuffer ${type}Buffer`, e);
}
},
changeType: (codec) => (type, sourceUpdater) => {
const sourceBuffer = sourceUpdater[`${type}Buffer`];
const mime = getMimeForCodec(codec);
// can't do anything if the media source / source buffer is null
// or the media source does not contain this source buffer.
if (!inSourceBuffers(sourceUpdater.mediaSource, sourceBuffer)) {
return;
}
// do not update codec if we don't need to.
// Only update if we change the codec base.
// For example, going from avc1.640028 to avc1.64001f does not require a changeType call.
const newCodecBase = codec.substring(0, codec.indexOf('.'));
const oldCodec = sourceUpdater.codecs[type];
const oldCodecBase = oldCodec.substring(0, oldCodec.indexOf('.'));
if (oldCodecBase === newCodecBase) {
return;
}
const metadata = {
codecsChangeInfo: {
from: oldCodec,
to: codec
}
};
sourceUpdater.trigger({ type: 'codecschange', metadata });
sourceUpdater.logger_(`changing ${type}Buffer codec from ${oldCodec} to ${codec}`);
// check if change to the provided type is supported
try {
sourceBuffer.changeType(mime);
sourceUpdater.codecs[type] = codec;
} catch (e) {
metadata.errorType = videojs.Error.StreamingCodecsChangeError;
metadata.error = e;
e.metadata = metadata;
sourceUpdater.error_ = e;
sourceUpdater.trigger('error');
videojs.log.warn(`Failed to changeType on ${type}Buffer`, e);
}
}
};
const pushQueue = ({type, sourceUpdater, action, doneFn, name}) => {
sourceUpdater.queue.push({
type,
action,
doneFn,
name
});
shiftQueue(type, sourceUpdater);
};
const onUpdateend = (type, sourceUpdater) => (e) => {
  // Although there should, in theory, be a pending action for any updateend received,
// there are some actions that may trigger updateend events without set definitions in
// the w3c spec. For instance, setting the duration on the media source may trigger
// updateend events on source buffers. This does not appear to be in the spec. As such,
// if we encounter an updateend without a corresponding pending action from our queue
// for that source buffer type, process the next action.
const bufferedRangesForType = sourceUpdater[`${type}Buffered`]();
const descriptiveString = bufferedRangesToString(bufferedRangesForType);
sourceUpdater.logger_(`received "updateend" event for ${type} Source Buffer: `, descriptiveString);
if (sourceUpdater.queuePending[type]) {
const doneFn = sourceUpdater.queuePending[type].doneFn;
sourceUpdater.queuePending[type] = null;
if (doneFn) {
// if there's an error, report it
doneFn(sourceUpdater[`${type}Error_`]);
}
}
shiftQueue(type, sourceUpdater);
};
/**
* A queue of callbacks to be serialized and applied when a
* MediaSource and its associated SourceBuffers are not in the
* updating state. It is used by the segment loader to update the
* underlying SourceBuffers when new data is loaded, for instance.
*
* @class SourceUpdater
 * @param {MediaSource} mediaSource the MediaSource to create SourceBuffers on
*/
export default class SourceUpdater extends videojs.EventTarget {
constructor(mediaSource) {
super();
this.mediaSource = mediaSource;
this.sourceopenListener_ = () => shiftQueue('mediaSource', this);
this.mediaSource.addEventListener('sourceopen', this.sourceopenListener_);
this.logger_ = logger('SourceUpdater');
// initial timestamp offset is 0
this.audioTimestampOffset_ = 0;
this.videoTimestampOffset_ = 0;
this.queue = [];
this.queuePending = {
audio: null,
video: null
};
this.delayedAudioAppendQueue_ = [];
this.videoAppendQueued_ = false;
this.codecs = {};
this.onVideoUpdateEnd_ = onUpdateend('video', this);
this.onAudioUpdateEnd_ = onUpdateend('audio', this);
this.onVideoError_ = (e) => {
// used for debugging
this.videoError_ = e;
};
this.onAudioError_ = (e) => {
// used for debugging
this.audioError_ = e;
};
this.createdSourceBuffers_ = false;
this.initializedEme_ = false;
this.triggeredReady_ = false;
}
initializedEme() {
this.initializedEme_ = true;
this.triggerReady();
}
hasCreatedSourceBuffers() {
// if false, likely waiting on one of the segment loaders to get enough data to create
// source buffers
return this.createdSourceBuffers_;
}
hasInitializedAnyEme() {
return this.initializedEme_;
}
ready() {
return this.hasCreatedSourceBuffers() && this.hasInitializedAnyEme();
}
createSourceBuffers(codecs) {
if (this.hasCreatedSourceBuffers()) {
// already created them before
return;
}
    // the initial addOrChangeSourceBuffers will always be
// two add buffers.
this.addOrChangeSourceBuffers(codecs);
this.createdSourceBuffers_ = true;
this.trigger('createdsourcebuffers');
this.triggerReady();
}
triggerReady() {
// only allow ready to be triggered once, this prevents the case
// where:
// 1. we trigger createdsourcebuffers
    // 2. IE 11 synchronously initializes eme
// 3. the synchronous initialization causes us to trigger ready
// 4. We go back to the ready check in createSourceBuffers and ready is triggered again.
if (this.ready() && !this.triggeredReady_) {
this.triggeredReady_ = true;
this.trigger('ready');
}
}
/**
* Add a type of source buffer to the media source.
*
* @param {string} type
* The type of source buffer to add.
*
* @param {string} codec
* The codec to add the source buffer with.
*/
addSourceBuffer(type, codec) {
pushQueue({
type: 'mediaSource',
sourceUpdater: this,
action: actions.addSourceBuffer(type, codec),
name: 'addSourceBuffer'
});
}
/**
* call abort on a source buffer.
*
* @param {string} type
* The type of source buffer to call abort on.
*/
abort(type) {
pushQueue({
type,
sourceUpdater: this,
action: actions.abort(type),
name: 'abort'
});
}
/**
* Call removeSourceBuffer and remove a specific type
* of source buffer on the mediaSource.
*
* @param {string} type
* The type of source buffer to remove.
*/
removeSourceBuffer(type) {
if (!this.canRemoveSourceBuffer()) {
videojs.log.error('removeSourceBuffer is not supported!');
return;
}
pushQueue({
type: 'mediaSource',
sourceUpdater: this,
action: actions.removeSourceBuffer(type),
name: 'removeSourceBuffer'
});
}
/**
* Whether or not the removeSourceBuffer function is supported
* on the mediaSource.
*
* @return {boolean}
* if removeSourceBuffer can be called.
*/
canRemoveSourceBuffer() {
    // As of Firefox 83, removeSourceBuffer
    // throws errors, so we report that it is not supported.
return !videojs.browser.IS_FIREFOX && window.MediaSource &&
window.MediaSource.prototype &&
typeof window.MediaSource.prototype.removeSourceBuffer === 'function';
}
/**
* Whether or not the changeType function is supported
* on our SourceBuffers.
*
* @return {boolean}
* if changeType can be called.
*/
static canChangeType() {
return window.SourceBuffer &&
window.SourceBuffer.prototype &&
typeof window.SourceBuffer.prototype.changeType === 'function';
}
/**
* Whether or not the changeType function is supported
* on our SourceBuffers.
*
* @return {boolean}
* if changeType can be called.
*/
canChangeType() {
return this.constructor.canChangeType();
}
/**
* Call the changeType function on a source buffer, given the code and type.
*
* @param {string} type
* The type of source buffer to call changeType on.
*
* @param {string} codec
* The codec string to change type with on the source buffer.
*/
changeType(type, codec) {
if (!this.canChangeType()) {
videojs.log.error('changeType is not supported!');
return;
}
pushQueue({
type,
sourceUpdater: this,
action: actions.changeType(codec),
name: 'changeType'
});
}
/**
 * Add source buffers with a codec or, if they are already created,
 * call changeType on the existing source buffers.
*
* @param {Object} codecs
* Codecs to switch to
*/
addOrChangeSourceBuffers(codecs) {
if (!codecs || typeof codecs !== 'object' || Object.keys(codecs).length === 0) {
throw new Error('Cannot addOrChangeSourceBuffers to undefined codecs');
}
Object.keys(codecs).forEach((type) => {
const codec = codecs[type];
if (!this.hasCreatedSourceBuffers()) {
return this.addSourceBuffer(type, codec);
}
if (this.canChangeType()) {
this.changeType(type, codec);
}
});
}
/**
* Queue an update to append an ArrayBuffer.
*
 * @param {Object} options an object containing the segmentInfo, type ('audio' or 'video'), and bytes to append
 * @param {Function} doneFn the function to call when done
* @see http://www.w3.org/TR/media-source/#widl-SourceBuffer-appendBuffer-void-ArrayBuffer-data
*/
appendBuffer(options, doneFn) {
const {segmentInfo, type, bytes} = options;
this.processedAppend_ = true;
if (type === 'audio' && this.videoBuffer && !this.videoAppendQueued_) {
this.delayedAudioAppendQueue_.push([options, doneFn]);
this.logger_(`delayed audio append of ${bytes.length} until video append`);
return;
}
// In the case of certain errors, for instance, QUOTA_EXCEEDED_ERR, updateend will
// not be fired. This means that the queue will be blocked until the next action
// taken by the segment-loader. Provide a mechanism for segment-loader to handle
// these errors by calling the doneFn with the specific error.
const onError = doneFn;
pushQueue({
type,
sourceUpdater: this,
action: actions.appendBuffer(bytes, segmentInfo || {mediaIndex: -1}, onError),
doneFn,
name: 'appendBuffer'
});
if (type === 'video') {
this.videoAppendQueued_ = true;
if (!this.delayedAudioAppendQueue_.length) {
return;
}
const queue = this.delayedAudioAppendQueue_.slice();
this.logger_(`queuing delayed audio ${queue.length} appendBuffers`);
this.delayedAudioAppendQueue_.length = 0;
queue.forEach((que) => {
this.appendBuffer.apply(this, que);
});
}
}
/**
* Get the audio buffer's buffered timerange.
*
* @return {TimeRange}
* The audio buffer's buffered time range
*/
audioBuffered() {
    // no media source/source buffer or it isn't in the media source's
// source buffer list
if (!inSourceBuffers(this.mediaSource, this.audioBuffer)) {
return createTimeRanges();
}
return this.audioBuffer.buffered ? this.audioBuffer.buffered :
createTimeRanges();
}
/**
* Get the video buffer's buffered timerange.
*
* @return {TimeRange}
* The video buffer's buffered time range
*/
videoBuffered() {
    // no media source/source buffer or it isn't in the media source's
// source buffer list
if (!inSourceBuffers(this.mediaSource, this.videoBuffer)) {
return createTimeRanges();
}
return this.videoBuffer.buffered ? this.videoBuffer.buffered :
createTimeRanges();
}
/**
* Get a combined video/audio buffer's buffered timerange.
*
* @return {TimeRange}
* the combined time range
*/
buffered() {
const video = inSourceBuffers(this.mediaSource, this.videoBuffer) ? this.videoBuffer : null;
const audio = inSourceBuffers(this.mediaSource, this.audioBuffer) ? this.audioBuffer : null;
if (audio && !video) {
return this.audioBuffered();
}
if (video && !audio) {
return this.videoBuffered();
}
return bufferIntersection(this.audioBuffered(), this.videoBuffered());
}
/**
* Add a callback to the queue that will set duration on the mediaSource.
*
* @param {number} duration
* The duration to set
*
* @param {Function} [doneFn]
* function to run after duration has been set.
*/
setDuration(duration, doneFn = noop) {
// In order to set the duration on the media source, it's necessary to wait for all
// source buffers to no longer be updating. "If the updating attribute equals true on
// any SourceBuffer in sourceBuffers, then throw an InvalidStateError exception and
// abort these steps." (source: https://www.w3.org/TR/media-source/#attributes).
pushQueue({
type: 'mediaSource',
sourceUpdater: this,
action: actions.duration(duration),
name: 'duration',
doneFn
});
}
/**
* Add a mediaSource endOfStream call to the queue
*
* @param {Error} [error]
* Call endOfStream with an error
*
* @param {Function} [doneFn]
* A function that should be called when the
* endOfStream call has finished.
*/
endOfStream(error = null, doneFn = noop) {
if (typeof error !== 'string') {
error = undefined;
}
// In order to set the duration on the media source, it's necessary to wait for all
// source buffers to no longer be updating. "If the updating attribute equals true on
// any SourceBuffer in sourceBuffers, then throw an InvalidStateError exception and
// abort these steps." (source: https://www.w3.org/TR/media-source/#attributes).
pushQueue({
type: 'mediaSource',
sourceUpdater: this,
action: actions.endOfStream(error),
name: 'endOfStream',
doneFn
});
}
/**
* Queue an update to remove a time range from the buffer.
*
* @param {number} start where to start the removal
* @param {number} end where to end the removal
* @param {Function} [done=noop] optional callback to be executed when the remove
* operation is complete
* @see http://www.w3.org/TR/media-source/#widl-SourceBuffer-remove-void-double-start-unrestricted-double-end
*/
removeAudio(start, end, done = noop) {
if (!this.audioBuffered().length || this.audioBuffered().end(0) === 0) {
done();
return;
}
pushQueue({
type: 'audio',
sourceUpdater: this,
action: actions.remove(start, end),
doneFn: done,
name: 'remove'
});
}
/**
* Queue an update to remove a time range from the buffer.
*
* @param {number} start where to start the removal
* @param {number} end where to end the removal
* @param {Function} [done=noop] optional callback to be executed when the remove
* operation is complete
* @see http://www.w3.org/TR/media-source/#widl-SourceBuffer-remove-void-double-start-unrestricted-double-end
*/
removeVideo(start, end, done = noop) {
if (!this.videoBuffered().length || this.videoBuffered().end(0) === 0) {
done();
return;
}
pushQueue({
type: 'video',
sourceUpdater: this,
action: actions.remove(start, end),
doneFn: done,
name: 'remove'
});
}
/**
* Whether the underlying sourceBuffer is updating or not
*
* @return {boolean} the updating status of the SourceBuffer
*/
updating() {
// the audio/video source buffer is updating
if (updating('audio', this) || updating('video', this)) {
return true;
}
return false;
}
/**
* Set/get the timestampoffset on the audio SourceBuffer
*
* @return {number} the timestamp offset
*/
audioTimestampOffset(offset) {
if (typeof offset !== 'undefined' &&
this.audioBuffer &&
// no point in updating if it's the same
this.audioTimestampOffset_ !== offset) {
pushQueue({
type: 'audio',
sourceUpdater: this,
action: actions.timestampOffset(offset),
name: 'timestampOffset'
});
this.audioTimestampOffset_ = offset;
}
return this.audioTimestampOffset_;
}
/**
* Set/get the timestampoffset on the video SourceBuffer
*
* @return {number} the timestamp offset
*/
videoTimestampOffset(offset) {
if (typeof offset !== 'undefined' &&
this.videoBuffer &&
// no point in updating if it's the same
this.videoTimestampOffset_ !== offset) {
pushQueue({
type: 'video',
sourceUpdater: this,
action: actions.timestampOffset(offset),
name: 'timestampOffset'
});
this.videoTimestampOffset_ = offset;
}
return this.videoTimestampOffset_;
}
/**
* Add a function to the queue that will be called
* when it is its turn to run in the audio queue.
*
* @param {Function} callback
* The callback to queue.
*/
audioQueueCallback(callback) {
if (!this.audioBuffer) {
return;
}
pushQueue({
type: 'audio',
sourceUpdater: this,
action: actions.callback(callback),
name: 'callback'
});
}
/**
* Add a function to the queue that will be called
* when it is its turn to run in the video queue.
*
* @param {Function} callback
* The callback to queue.
*/
videoQueueCallback(callback) {
if (!this.videoBuffer) {
return;
}
pushQueue({
type: 'video',
sourceUpdater: this,
action: actions.callback(callback),
name: 'callback'
});
}
/**
* dispose of the source updater and the underlying sourceBuffer
*/
dispose() {
this.trigger('dispose');
bufferTypes.forEach((type) => {
this.abort(type);
if (this.canRemoveSourceBuffer()) {
this.removeSourceBuffer(type);
} else {
this[`${type}QueueCallback`](() => cleanupBuffer(type, this));
}
});
this.videoAppendQueued_ = false;
this.delayedAudioAppendQueue_.length = 0;
if (this.sourceopenListener_) {
this.mediaSource.removeEventListener('sourceopen', this.sourceopenListener_);
}
this.off();
}
}
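// Usage sketch: `ready()` requires both source buffer creation and EME
// initialization, so a caller without DRM calls `initializedEme()`
// immediately. The codec strings and `fmp4Bytes` below are examples.
//
//   const mediaSource = new window.MediaSource();
//   const sourceUpdater = new SourceUpdater(mediaSource);
//   sourceUpdater.on('ready', () => {
//     sourceUpdater.appendBuffer(
//       { type: 'video', bytes: fmp4Bytes, segmentInfo },
//       (error) => { /* e.g. QUOTA_EXCEEDED_ERR surfaces here */ }
//     );
//   });
//   sourceUpdater.createSourceBuffers({ video: 'avc1.4d401f', audio: 'mp4a.40.2' });
//   sourceUpdater.initializedEme();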

View File

@ -0,0 +1,692 @@
/**
* @file sync-controller.js
*/
import {sumDurations, getPartsAndSegments} from './playlist';
import videojs from 'video.js';
import logger from './util/logger';
import {MediaSequenceSync, DependantMediaSequenceSync} from './util/media-sequence-sync';
// The maximum gap allowed between two media sequence tags when trying to
// synchronize expired playlist segments.
// the max media sequence diff is 48 hours of live stream
// content with two second segments. Anything larger than that
// will likely be invalid.
const MAX_MEDIA_SEQUENCE_DIFF_FOR_SYNC = 86400;
export const syncPointStrategies = [
// Stategy "VOD": Handle the VOD-case where the sync-point is *always*
// the equivalence display-time 0 === segment-index 0
{
name: 'VOD',
run: (syncController, playlist, duration, currentTimeline, currentTime) => {
if (duration !== Infinity) {
const syncPoint = {
time: 0,
segmentIndex: 0,
partIndex: null
};
return syncPoint;
}
return null;
}
},
{
name: 'MediaSequence',
/**
* run media sequence strategy
*
* @param {SyncController} syncController
* @param {Object} playlist
* @param {number} duration
* @param {number} currentTimeline
* @param {number} currentTime
* @param {string} type
*/
run: (syncController, playlist, duration, currentTimeline, currentTime, type) => {
const mediaSequenceSync = syncController.getMediaSequenceSync(type);
if (!mediaSequenceSync) {
return null;
}
if (!mediaSequenceSync.isReliable) {
return null;
}
const syncInfo = mediaSequenceSync.getSyncInfoForTime(currentTime);
if (!syncInfo) {
return null;
}
return {
time: syncInfo.start,
partIndex: syncInfo.partIndex,
segmentIndex: syncInfo.segmentIndex
};
}
},
// Stategy "ProgramDateTime": We have a program-date-time tag in this playlist
{
name: 'ProgramDateTime',
run: (syncController, playlist, duration, currentTimeline, currentTime) => {
if (!Object.keys(syncController.timelineToDatetimeMappings).length) {
return null;
}
let syncPoint = null;
let lastDistance = null;
const partsAndSegments = getPartsAndSegments(playlist);
currentTime = currentTime || 0;
for (let i = 0; i < partsAndSegments.length; i++) {
// start from the end and loop backwards for live
// or start from the front and loop forwards for non-live
const index = (playlist.endList || currentTime === 0) ? i : partsAndSegments.length - (i + 1);
const partAndSegment = partsAndSegments[index];
const segment = partAndSegment.segment;
const datetimeMapping =
syncController.timelineToDatetimeMappings[segment.timeline];
if (!datetimeMapping || !segment.dateTimeObject) {
continue;
}
const segmentTime = segment.dateTimeObject.getTime() / 1000;
let start = segmentTime + datetimeMapping;
// take part duration into account.
if (segment.parts && typeof partAndSegment.partIndex === 'number') {
for (let z = 0; z < partAndSegment.partIndex; z++) {
start += segment.parts[z].duration;
}
}
const distance = Math.abs(currentTime - start);
// Once the distance begins to increase, or if distance is 0, we have passed
// currentTime and can stop looking for better candidates
if (lastDistance !== null && (distance === 0 || lastDistance < distance)) {
break;
}
lastDistance = distance;
syncPoint = {
time: start,
segmentIndex: partAndSegment.segmentIndex,
partIndex: partAndSegment.partIndex
};
}
return syncPoint;
}
},
// Stategy "Segment": We have a known time mapping for a timeline and a
// segment in the current timeline with timing data
{
name: 'Segment',
run: (syncController, playlist, duration, currentTimeline, currentTime) => {
let syncPoint = null;
let lastDistance = null;
currentTime = currentTime || 0;
const partsAndSegments = getPartsAndSegments(playlist);
for (let i = 0; i < partsAndSegments.length; i++) {
// start from the end and loop backwards for live
// or start from the front and loop forwards for non-live
const index = (playlist.endList || currentTime === 0) ? i : partsAndSegments.length - (i + 1);
const partAndSegment = partsAndSegments[index];
const segment = partAndSegment.segment;
const start = partAndSegment.part && partAndSegment.part.start || segment && segment.start;
if (segment.timeline === currentTimeline && typeof start !== 'undefined') {
const distance = Math.abs(currentTime - start);
// Once the distance begins to increase, we have passed
// currentTime and can stop looking for better candidates
if (lastDistance !== null && lastDistance < distance) {
break;
}
if (!syncPoint || lastDistance === null || lastDistance >= distance) {
lastDistance = distance;
syncPoint = {
time: start,
segmentIndex: partAndSegment.segmentIndex,
partIndex: partAndSegment.partIndex
};
}
}
}
return syncPoint;
}
},
// Stategy "Discontinuity": We have a discontinuity with a known
// display-time
{
name: 'Discontinuity',
run: (syncController, playlist, duration, currentTimeline, currentTime) => {
let syncPoint = null;
currentTime = currentTime || 0;
if (playlist.discontinuityStarts && playlist.discontinuityStarts.length) {
let lastDistance = null;
for (let i = 0; i < playlist.discontinuityStarts.length; i++) {
const segmentIndex = playlist.discontinuityStarts[i];
const discontinuity = playlist.discontinuitySequence + i + 1;
const discontinuitySync = syncController.discontinuities[discontinuity];
if (discontinuitySync) {
const distance = Math.abs(currentTime - discontinuitySync.time);
// Once the distance begins to increase, we have passed
// currentTime and can stop looking for better candidates
if (lastDistance !== null && lastDistance < distance) {
break;
}
if (!syncPoint || lastDistance === null || lastDistance >= distance) {
lastDistance = distance;
syncPoint = {
time: discontinuitySync.time,
segmentIndex,
partIndex: null
};
}
}
}
}
return syncPoint;
}
},
// Stategy "Playlist": We have a playlist with a known mapping of
// segment index to display time
{
name: 'Playlist',
run: (syncController, playlist, duration, currentTimeline, currentTime) => {
if (playlist.syncInfo) {
const syncPoint = {
time: playlist.syncInfo.time,
segmentIndex: playlist.syncInfo.mediaSequence - playlist.mediaSequence,
partIndex: null
};
return syncPoint;
}
return null;
}
}
];
export default class SyncController extends videojs.EventTarget {
constructor(options = {}) {
super();
    // ...for syncing across variants
this.timelines = [];
this.discontinuities = [];
this.timelineToDatetimeMappings = {};
    // TODO: this map should only be available for HLS, since only HLS has MediaSequence.
    // For some reason this map helps with syncing between quality switches for MPEG-DASH as well.
    // Moreover, if we disable this map for MPEG-DASH, quality switching will be broken.
    // MPEG-DASH should have its own separate sync strategy.
const main = new MediaSequenceSync();
const audio = new DependantMediaSequenceSync(main);
const vtt = new DependantMediaSequenceSync(main);
this.mediaSequenceStorage_ = {main, audio, vtt};
this.logger_ = logger('SyncController');
}
/**
*
* @param {string} loaderType
* @return {MediaSequenceSync|null}
*/
getMediaSequenceSync(loaderType) {
return this.mediaSequenceStorage_[loaderType] || null;
}
/**
* Find a sync-point for the playlist specified
*
* A sync-point is defined as a known mapping from display-time to
* a segment-index in the current playlist.
*
* @param {Playlist} playlist
* The playlist that needs a sync-point
* @param {number} duration
   * Duration of the MediaSource (Infinity if playing a live source)
* @param {number} currentTimeline
* The last timeline from which a segment was loaded
* @param {number} currentTime
* Current player's time
* @param {string} type
* Segment loader type
* @return {Object}
* A sync-point object
*/
getSyncPoint(playlist, duration, currentTimeline, currentTime, type) {
// Always use VOD sync point for VOD
if (duration !== Infinity) {
const vodSyncPointStrategy = syncPointStrategies.find(({ name }) => name === 'VOD');
return vodSyncPointStrategy.run(this, playlist, duration);
}
const syncPoints = this.runStrategies_(
playlist,
duration,
currentTimeline,
currentTime,
type
);
if (!syncPoints.length) {
// Signal that we need to attempt to get a sync-point manually
// by fetching a segment in the playlist and constructing
// a sync-point from that information
return null;
}
// If we have exact match just return it instead of finding the nearest distance
for (const syncPointInfo of syncPoints) {
const { syncPoint, strategy } = syncPointInfo;
const { segmentIndex, time } = syncPoint;
if (segmentIndex < 0) {
continue;
}
const selectedSegment = playlist.segments[segmentIndex];
const start = time;
const end = start + selectedSegment.duration;
      this.logger_(`Strategy: ${strategy}. Current time: ${currentTime}. Selected segment: ${segmentIndex}. Time: [${start} -> ${end}]`);
if (currentTime >= start && currentTime < end) {
this.logger_('Found sync point with exact match: ', syncPoint);
return syncPoint;
}
}
// Now find the sync-point that is closest to the currentTime because
// that should result in the most accurate guess about which segment
// to fetch
return this.selectSyncPoint_(syncPoints, { key: 'time', value: currentTime });
}
/**
* Calculate the amount of time that has expired off the playlist during playback
*
* @param {Playlist} playlist
* Playlist object to calculate expired from
* @param {number} duration
   * Duration of the MediaSource (Infinity if playing a live source)
* @return {number|null}
* The amount of time that has expired off the playlist during playback. Null
* if no sync-points for the playlist can be found.
*/
getExpiredTime(playlist, duration) {
if (!playlist || !playlist.segments) {
return null;
}
const syncPoints = this.runStrategies_(
playlist,
duration,
playlist.discontinuitySequence,
0
);
// Without sync-points, there is not enough information to determine the expired time
if (!syncPoints.length) {
return null;
}
const syncPoint = this.selectSyncPoint_(syncPoints, {
key: 'segmentIndex',
value: 0
});
// If the sync-point is beyond the start of the playlist, we want to subtract the
// duration from index 0 to syncPoint.segmentIndex instead of adding.
if (syncPoint.segmentIndex > 0) {
syncPoint.time *= -1;
}
return Math.abs(syncPoint.time + sumDurations({
defaultDuration: playlist.targetDuration,
durationList: playlist.segments,
startIndex: syncPoint.segmentIndex,
endIndex: 0
}));
}
/**
* Runs each sync-point strategy and returns a list of sync-points returned by the
* strategies
*
* @private
* @param {Playlist} playlist
* The playlist that needs a sync-point
* @param {number} duration
* Duration of the MediaSource (Infinity if playing a live source)
* @param {number} currentTimeline
* The last timeline from which a segment was loaded
* @param {number} currentTime
* Current player's time
* @param {string} type
* Segment loader type
* @return {Array}
* A list of sync-point objects
*/
runStrategies_(playlist, duration, currentTimeline, currentTime, type) {
const syncPoints = [];
    // Try to find a sync-point by utilizing various strategies...
for (let i = 0; i < syncPointStrategies.length; i++) {
const strategy = syncPointStrategies[i];
const syncPoint = strategy.run(
this,
playlist,
duration,
currentTimeline,
currentTime,
type
);
if (syncPoint) {
syncPoint.strategy = strategy.name;
syncPoints.push({
strategy: strategy.name,
syncPoint
});
}
}
return syncPoints;
}
/**
* Selects the sync-point nearest the specified target
*
* @private
* @param {Array} syncPoints
* List of sync-points to select from
* @param {Object} target
* Object specifying the property and value we are targeting
* @param {string} target.key
* Specifies the property to target. Must be either 'time' or 'segmentIndex'
* @param {number} target.value
* The value to target for the specified key.
* @return {Object}
* The sync-point nearest the target
*/
selectSyncPoint_(syncPoints, target) {
let bestSyncPoint = syncPoints[0].syncPoint;
let bestDistance = Math.abs(syncPoints[0].syncPoint[target.key] - target.value);
let bestStrategy = syncPoints[0].strategy;
for (let i = 1; i < syncPoints.length; i++) {
const newDistance = Math.abs(syncPoints[i].syncPoint[target.key] - target.value);
if (newDistance < bestDistance) {
bestDistance = newDistance;
bestSyncPoint = syncPoints[i].syncPoint;
bestStrategy = syncPoints[i].strategy;
}
}
this.logger_(`syncPoint for [${target.key}: ${target.value}] chosen with strategy` +
` [${bestStrategy}]: [time:${bestSyncPoint.time},` +
` segmentIndex:${bestSyncPoint.segmentIndex}` +
(typeof bestSyncPoint.partIndex === 'number' ? `,partIndex:${bestSyncPoint.partIndex}` : '') +
']');
return bestSyncPoint;
}
/**
* Save any meta-data present on the segments when segments leave
* the live window to the playlist to allow for synchronization at the
* playlist level later.
*
* @param {Playlist} oldPlaylist - The previous active playlist
* @param {Playlist} newPlaylist - The updated and most current playlist
*/
saveExpiredSegmentInfo(oldPlaylist, newPlaylist) {
const mediaSequenceDiff = newPlaylist.mediaSequence - oldPlaylist.mediaSequence;
// Ignore large media sequence gaps
if (mediaSequenceDiff > MAX_MEDIA_SEQUENCE_DIFF_FOR_SYNC) {
videojs.log.warn(`Not saving expired segment info. Media sequence gap ${mediaSequenceDiff} is too large.`);
return;
}
// When a segment expires from the playlist and it has a start time
// save that information as a possible sync-point reference in future
for (let i = mediaSequenceDiff - 1; i >= 0; i--) {
const lastRemovedSegment = oldPlaylist.segments[i];
if (lastRemovedSegment && typeof lastRemovedSegment.start !== 'undefined') {
newPlaylist.syncInfo = {
mediaSequence: oldPlaylist.mediaSequence + i,
time: lastRemovedSegment.start
};
this.logger_(`playlist refresh sync: [time:${newPlaylist.syncInfo.time},` +
` mediaSequence: ${newPlaylist.syncInfo.mediaSequence}]`);
this.trigger('syncinfoupdate');
break;
}
}
}
/**
* Save the mapping from playlist's ProgramDateTime to display. This should only happen
* before segments start to load.
*
* @param {Playlist} playlist - The currently active playlist
*/
setDateTimeMappingForStart(playlist) {
// It's possible for the playlist to be updated before playback starts, meaning time
// zero is not yet set. If, during these playlist refreshes, a discontinuity is
// crossed, then the old time zero mapping (for the prior timeline) would be retained
// unless the mappings are cleared.
this.timelineToDatetimeMappings = {};
if (playlist.segments &&
playlist.segments.length &&
playlist.segments[0].dateTimeObject) {
const firstSegment = playlist.segments[0];
const playlistTimestamp = firstSegment.dateTimeObject.getTime() / 1000;
this.timelineToDatetimeMappings[firstSegment.timeline] = -playlistTimestamp;
}
}
/**
* Calculates and saves timeline mappings, playlist sync info, and segment timing values
* based on the latest timing information.
*
* @param {Object} options
* Options object
* @param {SegmentInfo} options.segmentInfo
* The current active request information
* @param {boolean} options.shouldSaveTimelineMapping
* If there's a timeline change, determines if the timeline mapping should be
* saved for timeline mapping and program date time mappings.
*/
saveSegmentTimingInfo({ segmentInfo, shouldSaveTimelineMapping }) {
const didCalculateSegmentTimeMapping = this.calculateSegmentTimeMapping_(
segmentInfo,
segmentInfo.timingInfo,
shouldSaveTimelineMapping
);
const segment = segmentInfo.segment;
if (didCalculateSegmentTimeMapping) {
this.saveDiscontinuitySyncInfo_(segmentInfo);
// If the playlist does not have sync information yet, record that information
// now with segment timing information
if (!segmentInfo.playlist.syncInfo) {
segmentInfo.playlist.syncInfo = {
mediaSequence: segmentInfo.playlist.mediaSequence + segmentInfo.mediaIndex,
time: segment.start
};
}
}
const dateTime = segment.dateTimeObject;
if (segment.discontinuity && shouldSaveTimelineMapping && dateTime) {
this.timelineToDatetimeMappings[segment.timeline] = -(dateTime.getTime() / 1000);
}
}
timestampOffsetForTimeline(timeline) {
if (typeof this.timelines[timeline] === 'undefined') {
return null;
}
return this.timelines[timeline].time;
}
mappingForTimeline(timeline) {
if (typeof this.timelines[timeline] === 'undefined') {
return null;
}
return this.timelines[timeline].mapping;
}
/**
* Use the "media time" for a segment to generate a mapping to "display time" and
* save that display time to the segment.
*
* @private
* @param {SegmentInfo} segmentInfo
* The current active request information
* @param {Object} timingInfo
* The start and end time of the current segment in "media time"
* @param {boolean} shouldSaveTimelineMapping
* If there's a timeline change, determines if the timeline mapping should be
* saved in timelines.
* @return {boolean}
* Returns false if segment time mapping could not be calculated
*/
calculateSegmentTimeMapping_(segmentInfo, timingInfo, shouldSaveTimelineMapping) {
// TODO: remove side effects
const segment = segmentInfo.segment;
const part = segmentInfo.part;
let mappingObj = this.timelines[segmentInfo.timeline];
let start;
let end;
if (typeof segmentInfo.timestampOffset === 'number') {
mappingObj = {
time: segmentInfo.startOfSegment,
mapping: segmentInfo.startOfSegment - timingInfo.start
};
if (shouldSaveTimelineMapping) {
this.timelines[segmentInfo.timeline] = mappingObj;
this.trigger('timestampoffset');
this.logger_(`time mapping for timeline ${segmentInfo.timeline}: ` +
`[time: ${mappingObj.time}] [mapping: ${mappingObj.mapping}]`);
}
start = segmentInfo.startOfSegment;
end = timingInfo.end + mappingObj.mapping;
} else if (mappingObj) {
start = timingInfo.start + mappingObj.mapping;
end = timingInfo.end + mappingObj.mapping;
} else {
return false;
}
if (part) {
part.start = start;
part.end = end;
}
// If we don't have a segment start yet or the start value we got
// is less than our current segment.start value, save a new start value.
// We have to do this because parts will have segment timing info saved
// multiple times and we want segment start to be the earliest part start
// value for that segment.
if (!segment.start || start < segment.start) {
segment.start = start;
}
segment.end = end;
return true;
}
/**
   * Each time we have a discontinuity in the playlist, attempt to calculate the location
   * in display time of the start of the discontinuity and save that. We also save an accuracy
   * value so that we save values with the most accuracy (closest to 0).
*
* @private
* @param {SegmentInfo} segmentInfo - The current active request information
*/
saveDiscontinuitySyncInfo_(segmentInfo) {
const playlist = segmentInfo.playlist;
const segment = segmentInfo.segment;
    // If the current segment is a discontinuity then we know exactly where
    // the range starts, and its accuracy is 0 (greater accuracy values
    // mean more approximation)
if (segment.discontinuity) {
this.discontinuities[segment.timeline] = {
time: segment.start,
accuracy: 0
};
} else if (playlist.discontinuityStarts && playlist.discontinuityStarts.length) {
// Search for future discontinuities that we can provide better timing
// information for and save that information for sync purposes
for (let i = 0; i < playlist.discontinuityStarts.length; i++) {
const segmentIndex = playlist.discontinuityStarts[i];
const discontinuity = playlist.discontinuitySequence + i + 1;
const mediaIndexDiff = segmentIndex - segmentInfo.mediaIndex;
const accuracy = Math.abs(mediaIndexDiff);
if (!this.discontinuities[discontinuity] ||
this.discontinuities[discontinuity].accuracy > accuracy) {
let time;
if (mediaIndexDiff < 0) {
time = segment.start - sumDurations({
defaultDuration: playlist.targetDuration,
durationList: playlist.segments,
startIndex: segmentInfo.mediaIndex,
endIndex: segmentIndex
});
} else {
time = segment.end + sumDurations({
defaultDuration: playlist.targetDuration,
durationList: playlist.segments,
startIndex: segmentInfo.mediaIndex + 1,
endIndex: segmentIndex
});
}
this.discontinuities[discontinuity] = {
time,
accuracy
};
}
}
}
}
dispose() {
this.trigger('dispose');
this.off();
}
}
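// Usage sketch for a live stream (duration of Infinity), assuming a parsed
// `playlist` from the playlist loader; a null result signals that a segment
// must be fetched to construct a sync-point manually:
//
//   const syncController = new SyncController();
//   const syncPoint = syncController.getSyncPoint(
//     playlist,
//     Infinity,              // MediaSource duration (live)
//     0,                     // currentTimeline
//     player.currentTime(),  // currentTime
//     'main'                 // loader type
//   );
//   if (syncPoint) {
//     // syncPoint.time maps to playlist.segments[syncPoint.segmentIndex]
//   }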

View File

@ -0,0 +1,55 @@
import videojs from 'video.js';
/**
* The TimelineChangeController acts as a source for segment loaders to listen for and
* keep track of latest and pending timeline changes. This is useful to ensure proper
* sync, as each loader may need to make a consideration for what timeline the other
* loader is on before making changes which could impact the other loader's media.
*
* @class TimelineChangeController
* @extends videojs.EventTarget
*/
export default class TimelineChangeController extends videojs.EventTarget {
constructor() {
super();
this.pendingTimelineChanges_ = {};
this.lastTimelineChanges_ = {};
}
clearPendingTimelineChange(type) {
this.pendingTimelineChanges_[type] = null;
this.trigger('pendingtimelinechange');
}
pendingTimelineChange({ type, from, to }) {
if (typeof from === 'number' && typeof to === 'number') {
this.pendingTimelineChanges_[type] = { type, from, to };
this.trigger('pendingtimelinechange');
}
return this.pendingTimelineChanges_[type];
}
lastTimelineChange({ type, from, to }) {
if (typeof from === 'number' && typeof to === 'number') {
this.lastTimelineChanges_[type] = { type, from, to };
delete this.pendingTimelineChanges_[type];
const metadata = {
timelineChangeInfo: {
from,
to
}
};
this.trigger({ type: 'timelinechange', metadata });
}
return this.lastTimelineChanges_[type];
}
dispose() {
this.trigger('dispose');
this.pendingTimelineChanges_ = {};
this.lastTimelineChanges_ = {};
this.off();
}
}
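// Usage sketch: a loader records a pending change before crossing a
// discontinuity and promotes it to the "last" change once media from the
// new timeline has been appended:
//
//   const timelineChangeController = new TimelineChangeController();
//   timelineChangeController.pendingTimelineChange({ type: 'main', from: 0, to: 1 });
//   // ...after the first append from timeline 1 completes:
//   timelineChangeController.lastTimelineChange({ type: 'main', from: 0, to: 1 });
//   // calling without numeric from/to reads back the stored change:
//   timelineChangeController.lastTimelineChange({ type: 'main' }); // => { type: 'main', from: 0, to: 1 }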

View File

@ -0,0 +1,435 @@
/* global self */
/**
* @file transmuxer-worker.js
*/
/**
* videojs-contrib-media-sources
*
* Copyright (c) 2015 Brightcove
* All rights reserved.
*
* Handles communication between the browser-world and the mux.js
* transmuxer running inside of a WebWorker by exposing a simple
* message-based interface to a Transmuxer object.
*/
import {Transmuxer} from 'mux.js/lib/mp4/transmuxer';
import CaptionParser from 'mux.js/lib/mp4/caption-parser';
import WebVttParser from 'mux.js/lib/mp4/webvtt-parser';
import mp4probe from 'mux.js/lib/mp4/probe';
import tsInspector from 'mux.js/lib/tools/ts-inspector.js';
import {
ONE_SECOND_IN_TS,
secondsToVideoTs,
videoTsToSeconds
} from 'mux.js/lib/utils/clock';
/**
* Re-emits transmuxer events by converting them into messages to the
* world outside the worker.
*
 * @param {Object} self the worker's global scope to post messages to
 * @param {Object} transmuxer the transmuxer to wire events on
 * @private
*/
const wireTransmuxerEvents = function(self, transmuxer) {
transmuxer.on('data', function(segment) {
// transfer ownership of the underlying ArrayBuffer
// instead of doing a copy to save memory
// ArrayBuffers are transferable but generic TypedArrays are not
// @link https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers#Passing_data_by_transferring_ownership_(transferable_objects)
const initArray = segment.initSegment;
segment.initSegment = {
data: initArray.buffer,
byteOffset: initArray.byteOffset,
byteLength: initArray.byteLength
};
const typedArray = segment.data;
segment.data = typedArray.buffer;
self.postMessage({
action: 'data',
segment,
byteOffset: typedArray.byteOffset,
byteLength: typedArray.byteLength
}, [segment.data]);
});
transmuxer.on('done', function(data) {
self.postMessage({ action: 'done' });
});
transmuxer.on('gopInfo', function(gopInfo) {
self.postMessage({
action: 'gopInfo',
gopInfo
});
});
transmuxer.on('videoSegmentTimingInfo', function(timingInfo) {
const videoSegmentTimingInfo = {
start: {
decode: videoTsToSeconds(timingInfo.start.dts),
presentation: videoTsToSeconds(timingInfo.start.pts)
},
end: {
decode: videoTsToSeconds(timingInfo.end.dts),
presentation: videoTsToSeconds(timingInfo.end.pts)
},
baseMediaDecodeTime: videoTsToSeconds(timingInfo.baseMediaDecodeTime)
};
if (timingInfo.prependedContentDuration) {
videoSegmentTimingInfo.prependedContentDuration = videoTsToSeconds(timingInfo.prependedContentDuration);
}
self.postMessage({
action: 'videoSegmentTimingInfo',
videoSegmentTimingInfo
});
});
transmuxer.on('audioSegmentTimingInfo', function(timingInfo) {
// Note that all times for [audio/video]SegmentTimingInfo events are in video clock
const audioSegmentTimingInfo = {
start: {
decode: videoTsToSeconds(timingInfo.start.dts),
presentation: videoTsToSeconds(timingInfo.start.pts)
},
end: {
decode: videoTsToSeconds(timingInfo.end.dts),
presentation: videoTsToSeconds(timingInfo.end.pts)
},
baseMediaDecodeTime: videoTsToSeconds(timingInfo.baseMediaDecodeTime)
};
if (timingInfo.prependedContentDuration) {
audioSegmentTimingInfo.prependedContentDuration =
videoTsToSeconds(timingInfo.prependedContentDuration);
}
self.postMessage({
action: 'audioSegmentTimingInfo',
audioSegmentTimingInfo
});
});
transmuxer.on('id3Frame', function(id3Frame) {
self.postMessage({
action: 'id3Frame',
id3Frame
});
});
transmuxer.on('caption', function(caption) {
self.postMessage({
action: 'caption',
caption
});
});
transmuxer.on('trackinfo', function(trackInfo) {
self.postMessage({
action: 'trackinfo',
trackInfo
});
});
transmuxer.on('audioTimingInfo', function(audioTimingInfo) {
// convert to video TS since we prioritize video time over audio
self.postMessage({
action: 'audioTimingInfo',
audioTimingInfo: {
start: videoTsToSeconds(audioTimingInfo.start),
end: videoTsToSeconds(audioTimingInfo.end)
}
});
});
transmuxer.on('videoTimingInfo', function(videoTimingInfo) {
self.postMessage({
action: 'videoTimingInfo',
videoTimingInfo: {
start: videoTsToSeconds(videoTimingInfo.start),
end: videoTsToSeconds(videoTimingInfo.end)
}
});
});
transmuxer.on('log', function(log) {
self.postMessage({action: 'log', log});
});
};
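// Receiving-side sketch (illustrative, assuming `worker` is the Worker
// running this file): because 'data' messages transfer the underlying
// ArrayBuffer, the main thread rebuilds the typed arrays from the buffer
// plus the byteOffset/byteLength posted alongside it.
//
//   worker.onmessage = (event) => {
//     if (event.data.action === 'data') {
//       const { segment, byteOffset, byteLength } = event.data;
//       const bytes = new Uint8Array(segment.data, byteOffset, byteLength);
//       const init = new Uint8Array(
//         segment.initSegment.data,
//         segment.initSegment.byteOffset,
//         segment.initSegment.byteLength
//       );
//       // append `init` then `bytes` to a SourceBuffer...
//     }
//   };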
/**
 * All incoming messages route through this hash. If no function exists
 * to handle an incoming message, then we ignore the message.
 *
 * @class MessageHandlers
 * @param {Object} self the worker scope to post messages on
 * @param {Object} options the options to initialize with
 */
class MessageHandlers {
constructor(self, options) {
this.options = options || {};
this.self = self;
this.init();
}
/**
 * Initialize our web worker and wire all the events.
 */
init() {
if (this.transmuxer) {
this.transmuxer.dispose();
}
this.transmuxer = new Transmuxer(this.options);
wireTransmuxerEvents(this.self, this.transmuxer);
}
pushMp4Captions(data) {
if (!this.captionParser) {
this.captionParser = new CaptionParser();
this.captionParser.init();
}
const segment = new Uint8Array(data.data, data.byteOffset, data.byteLength);
const parsed = this.captionParser.parse(
segment,
data.trackIds,
data.timescales
);
this.self.postMessage({
action: 'mp4Captions',
captions: parsed && parsed.captions || [],
logs: parsed && parsed.logs || [],
data: segment.buffer
}, [segment.buffer]);
}
/**
 * Initializes the WebVttParser and passes it the init segment.
 *
 * @param {Object} data message data whose `data`, `byteOffset` and
 * `byteLength` fields describe the mp4-boxed WebVTT init segment
 */
initMp4WebVttParser(data) {
if (!this.webVttParser) {
this.webVttParser = new WebVttParser();
}
const segment = new Uint8Array(data.data, data.byteOffset, data.byteLength);
// Set the timescale for the parser.
// This can be called repeatedly in order to set and re-set the timescale.
this.webVttParser.init(segment);
}
/**
 * Parse an mp4-encapsulated WebVTT segment and post the resulting array of
 * parsed cue objects back to the main thread as a 'getMp4WebVttText' message.
 *
 * @param {Object} data message data whose `data`, `byteOffset` and
 * `byteLength` fields describe the text/webvtt segment
 */
getMp4WebVttText(data) {
if (!this.webVttParser) {
// timescale might not be set yet if the parser is created before an init segment is passed.
// default timescale is 90k.
this.webVttParser = new WebVttParser();
}
const segment = new Uint8Array(data.data, data.byteOffset, data.byteLength);
const parsed = this.webVttParser.parseSegment(segment);
this.self.postMessage({
action: 'getMp4WebVttText',
mp4VttCues: parsed || [],
data: segment.buffer
}, [segment.buffer]);
}
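// Caller sketch (illustrative): the main thread transfers the segment's
// buffer with the request and listens for the 'getMp4WebVttText' reply
// carrying `mp4VttCues`. `bytes` is assumed to be a Uint8Array holding an
// mp4-encapsulated WebVTT segment.
//
//   worker.postMessage({
//     action: 'getMp4WebVttText',
//     data: bytes.buffer,
//     byteOffset: bytes.byteOffset,
//     byteLength: bytes.byteLength
//   }, [bytes.buffer]);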
probeMp4StartTime({timescales, data}) {
const startTime = mp4probe.startTime(timescales, data);
this.self.postMessage({
action: 'probeMp4StartTime',
startTime,
data
}, [data.buffer]);
}
probeMp4Tracks({data}) {
const tracks = mp4probe.tracks(data);
this.self.postMessage({
action: 'probeMp4Tracks',
tracks,
data
}, [data.buffer]);
}
/**
 * Probes an mp4 segment for EMSG boxes containing ID3 data and posts the
 * resulting array of ID3 frames back to the main thread.
 * https://aomediacodec.github.io/id3-emsg/
 *
 * @param {Uint8Array} data segment data
 * @param {number} offset segment start time
 */
probeEmsgID3({data, offset}) {
const id3Frames = mp4probe.getEmsgID3(data, offset);
this.self.postMessage({
action: 'probeEmsgID3',
id3Frames,
emsgData: data
}, [data.buffer]);
}
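// Caller sketch (illustrative): `data` arrives as a typed array whose
// buffer is transferred back with the reply, so the caller must not reuse
// `bytes` after posting. `offset` is the segment start time.
//
//   worker.postMessage({
//     action: 'probeEmsgID3',
//     data: bytes,
//     offset: segmentStartTime
//   }, [bytes.buffer]);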
/**
 * Probe an mpeg2-ts segment to determine the start time of the segment in its
 * internal "media time," as well as whether it contains video and/or audio.
 * The result is posted back to the main thread as a 'probeTs' message rather
 * than returned directly.
 *
 * @private
 * @param {Uint8Array} data - segment bytes
 * @param {number} baseStartTime
 * Relative reference timestamp used when adjusting frame timestamps for rollover.
 * This value should be in seconds, as it's converted to a 90kHz clock within the
 * function body.
 */
probeTs({data, baseStartTime}) {
const tsStartTime = (typeof baseStartTime === 'number' && !isNaN(baseStartTime)) ?
(baseStartTime * ONE_SECOND_IN_TS) :
void 0;
const timeInfo = tsInspector.inspect(data, tsStartTime);
let result = null;
if (timeInfo) {
result = {
// each type's time info comes back as an array of 2 times, start and end
hasVideo: timeInfo.video && timeInfo.video.length === 2 || false,
hasAudio: timeInfo.audio && timeInfo.audio.length === 2 || false
};
if (result.hasVideo) {
result.videoStart = timeInfo.video[0].ptsTime;
}
if (result.hasAudio) {
result.audioStart = timeInfo.audio[0].ptsTime;
}
}
this.self.postMessage({
action: 'probeTs',
result,
data
}, [data.buffer]);
}
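// Worked example (illustrative): `baseStartTime` is given in seconds and
// scaled onto the 90kHz MPEG-TS clock before inspection. With
// ONE_SECOND_IN_TS === 90000, a 10-second reference becomes:
//
//   10 * ONE_SECOND_IN_TS; // => 900000 ticks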
clearAllMp4Captions() {
if (this.captionParser) {
this.captionParser.clearAllCaptions();
}
}
clearParsedMp4Captions() {
if (this.captionParser) {
this.captionParser.clearParsedCaptions();
}
}
/**
 * Adds data (a ts segment) to the start of the transmuxer pipeline for
 * processing.
 *
 * @param {Object} data message data containing the ArrayBuffer segment to
 * push into the muxer
 */
push(data) {
// Cast array buffer to correct type for transmuxer
const segment = new Uint8Array(data.data, data.byteOffset, data.byteLength);
this.transmuxer.push(segment);
}
/**
 * Recreate the transmuxer so that the next segment added via `push`
 * starts with a fresh transmuxer.
 */
reset() {
this.transmuxer.reset();
}
/**
* Set the value that will be used as the `baseMediaDecodeTime` time for the
* next segment pushed in. Subsequent segments will have their `baseMediaDecodeTime`
* set relative to the first based on the PTS values.
*
* @param {Object} data used to set the timestamp offset in the muxer
*/
setTimestampOffset(data) {
const timestampOffset = data.timestampOffset || 0;
this.transmuxer.setBaseMediaDecodeTime(Math.round(secondsToVideoTs(timestampOffset)));
}
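// Worked example (illustrative, using mux.js's 90kHz video clock): a
// 2-second timestamp offset becomes
//
//   Math.round(secondsToVideoTs(2)); // => 180000 ticks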
setAudioAppendStart(data) {
this.transmuxer.setAudioAppendStart(Math.ceil(secondsToVideoTs(data.appendStart)));
}
setRemux(data) {
this.transmuxer.setRemux(data.remux);
}
/**
 * Forces the pipeline to finish processing the last segment and emit its
 * results.
 *
 * @param {Object} data event data; not actually used
 */
flush(data) {
this.transmuxer.flush();
// transmuxed done action is fired after both audio/video pipelines are flushed
self.postMessage({
action: 'done',
type: 'transmuxed'
});
}
endTimeline() {
this.transmuxer.endTimeline();
// transmuxed endedtimeline action is fired after both audio/video pipelines end their
// timelines
self.postMessage({
action: 'endedtimeline',
type: 'transmuxed'
});
}
alignGopsWith(data) {
this.transmuxer.alignGopsWith(data.gopsToAlignWith.slice());
}
}
/**
 * Our web worker interface so that the main thread can talk to mux.js
 * running in a web worker. The scope is passed to this by webworkify.
 *
 * @param {Object} self the scope for the web worker
 */
self.onmessage = function(event) {
if (event.data.action === 'init' && event.data.options) {
this.messageHandlers = new MessageHandlers(self, event.data.options);
return;
}
if (!this.messageHandlers) {
this.messageHandlers = new MessageHandlers(self);
}
if (event.data && event.data.action && event.data.action !== 'init') {
if (this.messageHandlers[event.data.action]) {
this.messageHandlers[event.data.action](event.data);
}
}
};
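// End-to-end usage sketch (illustrative, not part of the original source):
// a minimal main-thread driver for this worker, assuming the file was
// bundled as 'transmuxer.worker.js'. The action names mirror the
// MessageHandlers methods above.
//
//   const worker = new Worker('transmuxer.worker.js');
//   worker.postMessage({ action: 'init', options: { remux: true } });
//   worker.onmessage = ({ data }) => {
//     if (data.action === 'data') { /* append the transmuxed bytes */ }
//     if (data.action === 'done' && data.type === 'transmuxed') { /* segment complete */ }
//   };
//   // push a TS segment (transferring its buffer to avoid a copy), then flush:
//   worker.postMessage({
//     action: 'push',
//     data: tsBytes.buffer,
//     byteOffset: tsBytes.byteOffset,
//     byteLength: tsBytes.byteLength
//   }, [tsBytes.buffer]);
//   worker.postMessage({ action: 'flush' });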

Some files were not shown because too many files have changed in this diff