Mirror of https://github.com/yt-dlp/yt-dlp.git (synced 2024-11-20 05:47:24 +01:00)

Commit 87c8703f90 — Merge branch 'master' into valid-urls-proposal

.github/ISSUE_TEMPLATE/1_broken_site.yml (20 changes, vendored)

@@ -1,5 +1,5 @@
-name: Broken site
-description: Report broken or misfunctioning site
+name: Broken site support
+description: Report issue with yt-dlp on a supported site
 labels: [triage, site-bug]
 body:
   - type: checkboxes
@@ -7,7 +7,7 @@ body:
       label: DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
       description: Fill all fields even if you think it is irrelevant for the issue
       options:
-        - label: I understand that I will be **blocked** if I remove or skip any mandatory\* field
+        - label: I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
          required: true
   - type: checkboxes
     id: checklist
@@ -16,15 +16,15 @@ body:
      description: |
        Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
      options:
-        - label: I'm reporting a broken site
+        - label: I'm reporting that yt-dlp is broken on a **supported** site
          required: true
-        - label: I've verified that I'm running yt-dlp version **2023.01.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+        - label: I've verified that I'm running yt-dlp version **2023.06.22** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
          required: true
        - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
          required: true
        - label: I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
          required: true
-        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
          required: true
        - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
          required: true
@@ -50,6 +50,8 @@ body:
      options:
        - label: Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
          required: true
+        - label: "If using API, add `'verbose': True` to `YoutubeDL` params instead"
+          required: false
        - label: Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
          required: true
   - type: textarea
@@ -62,7 +64,7 @@ body:
        [debug] Command-line config: ['-vU', 'test:youtube']
        [debug] Portable config "yt-dlp.conf": ['-i']
        [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
-        [debug] yt-dlp version 2023.01.06 [9d339c4] (win32_exe)
+        [debug] yt-dlp version 2023.06.22 [9d339c4] (win32_exe)
        [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
        [debug] Checking exe version: ffmpeg -bsfs
        [debug] Checking exe version: ffprobe -bsfs
@@ -70,8 +72,8 @@ body:
        [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
        [debug] Proxy map: {}
        [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
-        Latest version: 2023.01.06, Current version: 2023.01.06
-        yt-dlp is up to date (2023.01.06)
+        Latest version: 2023.06.22, Current version: 2023.06.22
+        yt-dlp is up to date (2023.06.22)
        <more lines>
      render: shell
    validations:

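Aside on the checkbox added in the hunk above: `'verbose': True` refers to yt-dlp's Python embedding API. A minimal sketch of what that option is asking for, assuming a current yt-dlp install and a placeholder URL (illustration only, not part of this commit):

    # Sketch: the API counterpart of the -v flag from `yt-dlp -vU <your command line>`.
    # With 'verbose': True, YoutubeDL prints the same [debug] lines that the
    # templates ask reporters to copy in full.
    import yt_dlp

    params = {'verbose': True}  # mirrors the -v CLI flag
    with yt_dlp.YoutubeDL(params) as ydl:
        ydl.download(['https://example.com/watch?v=example'])  # placeholder URL
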
.github/ISSUE_TEMPLATE/2_site_support_request.yml (vendored)

@@ -7,7 +7,7 @@ body:
      label: DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
      description: Fill all fields even if you think it is irrelevant for the issue
      options:
-        - label: I understand that I will be **blocked** if I remove or skip any mandatory\* field
+        - label: I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
          required: true
   - type: checkboxes
     id: checklist
@@ -18,13 +18,13 @@ body:
      options:
        - label: I'm reporting a new site support request
          required: true
-        - label: I've verified that I'm running yt-dlp version **2023.01.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+        - label: I've verified that I'm running yt-dlp version **2023.06.22** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
          required: true
        - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
          required: true
        - label: I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
          required: true
-        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
          required: true
        - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
          required: true
@@ -62,6 +62,8 @@ body:
      options:
        - label: Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
          required: true
+        - label: "If using API, add `'verbose': True` to `YoutubeDL` params instead"
+          required: false
        - label: Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
          required: true
   - type: textarea
@@ -74,7 +76,7 @@ body:
        [debug] Command-line config: ['-vU', 'test:youtube']
        [debug] Portable config "yt-dlp.conf": ['-i']
        [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
-        [debug] yt-dlp version 2023.01.06 [9d339c4] (win32_exe)
+        [debug] yt-dlp version 2023.06.22 [9d339c4] (win32_exe)
        [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
        [debug] Checking exe version: ffmpeg -bsfs
        [debug] Checking exe version: ffprobe -bsfs
@@ -82,8 +84,8 @@ body:
        [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
        [debug] Proxy map: {}
        [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
-        Latest version: 2023.01.06, Current version: 2023.01.06
-        yt-dlp is up to date (2023.01.06)
+        Latest version: 2023.06.22, Current version: 2023.06.22
+        yt-dlp is up to date (2023.06.22)
        <more lines>
      render: shell
    validations:

.github/ISSUE_TEMPLATE/3_site_feature_request.yml (vendored)

@@ -7,7 +7,7 @@ body:
      label: DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
      description: Fill all fields even if you think it is irrelevant for the issue
      options:
-        - label: I understand that I will be **blocked** if I remove or skip any mandatory\* field
+        - label: I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
          required: true
   - type: checkboxes
     id: checklist
@@ -18,11 +18,11 @@ body:
      options:
        - label: I'm requesting a site-specific feature
          required: true
-        - label: I've verified that I'm running yt-dlp version **2023.01.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+        - label: I've verified that I'm running yt-dlp version **2023.06.22** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
          required: true
        - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
          required: true
-        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
          required: true
        - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
          required: true
@@ -58,6 +58,8 @@ body:
      options:
        - label: Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
          required: true
+        - label: "If using API, add `'verbose': True` to `YoutubeDL` params instead"
+          required: false
        - label: Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
          required: true
   - type: textarea
@@ -70,7 +72,7 @@ body:
        [debug] Command-line config: ['-vU', 'test:youtube']
        [debug] Portable config "yt-dlp.conf": ['-i']
        [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
-        [debug] yt-dlp version 2023.01.06 [9d339c4] (win32_exe)
+        [debug] yt-dlp version 2023.06.22 [9d339c4] (win32_exe)
        [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
        [debug] Checking exe version: ffmpeg -bsfs
        [debug] Checking exe version: ffprobe -bsfs
@@ -78,8 +80,8 @@ body:
        [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
        [debug] Proxy map: {}
        [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
-        Latest version: 2023.01.06, Current version: 2023.01.06
-        yt-dlp is up to date (2023.01.06)
+        Latest version: 2023.06.22, Current version: 2023.06.22
+        yt-dlp is up to date (2023.06.22)
        <more lines>
      render: shell
    validations:

.github/ISSUE_TEMPLATE/4_bug_report.yml (16 changes, vendored)

@@ -1,4 +1,4 @@
-name: Bug report
+name: Core bug report
 description: Report a bug unrelated to any particular site or extractor
 labels: [triage, bug]
 body:
@@ -7,7 +7,7 @@ body:
      label: DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
      description: Fill all fields even if you think it is irrelevant for the issue
      options:
-        - label: I understand that I will be **blocked** if I remove or skip any mandatory\* field
+        - label: I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
          required: true
   - type: checkboxes
     id: checklist
@@ -18,13 +18,13 @@ body:
      options:
        - label: I'm reporting a bug unrelated to a specific site
          required: true
-        - label: I've verified that I'm running yt-dlp version **2023.01.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+        - label: I've verified that I'm running yt-dlp version **2023.06.22** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
          required: true
        - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
          required: true
        - label: I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
          required: true
-        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
          required: true
        - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
          required: true
@@ -43,6 +43,8 @@ body:
      options:
        - label: Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
          required: true
+        - label: "If using API, add `'verbose': True` to `YoutubeDL` params instead"
+          required: false
        - label: Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
          required: true
   - type: textarea
@@ -55,7 +57,7 @@ body:
        [debug] Command-line config: ['-vU', 'test:youtube']
        [debug] Portable config "yt-dlp.conf": ['-i']
        [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
-        [debug] yt-dlp version 2023.01.06 [9d339c4] (win32_exe)
+        [debug] yt-dlp version 2023.06.22 [9d339c4] (win32_exe)
        [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
        [debug] Checking exe version: ffmpeg -bsfs
        [debug] Checking exe version: ffprobe -bsfs
@@ -63,8 +65,8 @@ body:
        [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
        [debug] Proxy map: {}
        [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
-        Latest version: 2023.01.06, Current version: 2023.01.06
-        yt-dlp is up to date (2023.01.06)
+        Latest version: 2023.06.22, Current version: 2023.06.22
+        yt-dlp is up to date (2023.06.22)
        <more lines>
      render: shell
    validations:

.github/ISSUE_TEMPLATE/5_feature_request.yml (14 changes, vendored)

@@ -7,7 +7,7 @@ body:
      label: DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
      description: Fill all fields even if you think it is irrelevant for the issue
      options:
-        - label: I understand that I will be **blocked** if I remove or skip any mandatory\* field
+        - label: I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
          required: true
   - type: checkboxes
     id: checklist
@@ -20,9 +20,9 @@ body:
          required: true
        - label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
          required: true
-        - label: I've verified that I'm running yt-dlp version **2023.01.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+        - label: I've verified that I'm running yt-dlp version **2023.06.22** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
          required: true
-        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
          required: true
        - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
          required: true
@@ -40,6 +40,8 @@ body:
      label: Provide verbose output that clearly demonstrates the problem
      options:
        - label: Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
+        - label: "If using API, add `'verbose': True` to `YoutubeDL` params instead"
+          required: false
        - label: Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
   - type: textarea
     id: log
@@ -51,7 +53,7 @@ body:
        [debug] Command-line config: ['-vU', 'test:youtube']
        [debug] Portable config "yt-dlp.conf": ['-i']
        [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
-        [debug] yt-dlp version 2023.01.06 [9d339c4] (win32_exe)
+        [debug] yt-dlp version 2023.06.22 [9d339c4] (win32_exe)
        [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
        [debug] Checking exe version: ffmpeg -bsfs
        [debug] Checking exe version: ffprobe -bsfs
@@ -59,7 +61,7 @@ body:
        [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
        [debug] Proxy map: {}
        [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
-        Latest version: 2023.01.06, Current version: 2023.01.06
-        yt-dlp is up to date (2023.01.06)
+        Latest version: 2023.06.22, Current version: 2023.06.22
+        yt-dlp is up to date (2023.06.22)
        <more lines>
      render: shell

.github/ISSUE_TEMPLATE/6_question.yml (14 changes, vendored)

@@ -7,7 +7,7 @@ body:
      label: DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
      description: Fill all fields even if you think it is irrelevant for the issue
      options:
-        - label: I understand that I will be **blocked** if I remove or skip any mandatory\* field
+        - label: I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
          required: true
   - type: markdown
     attributes:
@@ -26,9 +26,9 @@ body:
          required: true
        - label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
          required: true
-        - label: I've verified that I'm running yt-dlp version **2023.01.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+        - label: I've verified that I'm running yt-dlp version **2023.06.22** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
          required: true
-        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
          required: true
        - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
          required: true
@@ -46,6 +46,8 @@ body:
      label: Provide verbose output that clearly demonstrates the problem
      options:
        - label: Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
+        - label: "If using API, add `'verbose': True` to `YoutubeDL` params instead"
+          required: false
        - label: Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
   - type: textarea
     id: log
@@ -57,7 +59,7 @@ body:
        [debug] Command-line config: ['-vU', 'test:youtube']
        [debug] Portable config "yt-dlp.conf": ['-i']
        [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
-        [debug] yt-dlp version 2023.01.06 [9d339c4] (win32_exe)
+        [debug] yt-dlp version 2023.06.22 [9d339c4] (win32_exe)
        [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
        [debug] Checking exe version: ffmpeg -bsfs
        [debug] Checking exe version: ffprobe -bsfs
@@ -65,7 +67,7 @@ body:
        [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
        [debug] Proxy map: {}
        [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
-        Latest version: 2023.01.06, Current version: 2023.01.06
-        yt-dlp is up to date (2023.01.06)
+        Latest version: 2023.06.22, Current version: 2023.06.22
+        yt-dlp is up to date (2023.06.22)
        <more lines>
      render: shell

.github/ISSUE_TEMPLATE_tmpl/1_broken_site.yml (vendored)

@@ -1,5 +1,5 @@
-name: Broken site
-description: Report broken or misfunctioning site
+name: Broken site support
+description: Report issue with yt-dlp on a supported site
 labels: [triage, site-bug]
 body:
   %(no_skip)s
@@ -10,7 +10,7 @@ body:
      description: |
        Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
      options:
-        - label: I'm reporting a broken site
+        - label: I'm reporting that yt-dlp is broken on a **supported** site
          required: true
        - label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
          required: true
@@ -18,7 +18,7 @@ body:
          required: true
        - label: I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
          required: true
-        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
          required: true
        - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
          required: true

.github/ISSUE_TEMPLATE_tmpl/2_site_support_request.yml (vendored)

@@ -18,7 +18,7 @@ body:
          required: true
        - label: I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
          required: true
-        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
          required: true
        - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
          required: true

.github/ISSUE_TEMPLATE_tmpl/3_site_feature_request.yml (vendored)

@@ -16,7 +16,7 @@ body:
          required: true
        - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
          required: true
-        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
          required: true
        - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
          required: true

.github/ISSUE_TEMPLATE_tmpl/4_bug_report.yml (4 changes, vendored)

@@ -1,4 +1,4 @@
-name: Bug report
+name: Core bug report
 description: Report a bug unrelated to any particular site or extractor
 labels: [triage, bug]
 body:
@@ -18,7 +18,7 @@ body:
          required: true
        - label: I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
          required: true
-        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
          required: true
        - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
          required: true

.github/ISSUE_TEMPLATE_tmpl/5_feature_request.yml (vendored)

@@ -16,7 +16,7 @@ body:
          required: true
        - label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
          required: true
-        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
          required: true
        - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
          required: true

.github/ISSUE_TEMPLATE_tmpl/6_question.yml (2 changes, vendored)

@@ -22,7 +22,7 @@ body:
          required: true
        - label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
          required: true
-        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
          required: true
        - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
          required: true

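Aside on the `_tmpl` files above: `%(version)s` and `%(no_skip)s` are printf-style placeholders that get expanded into the concrete templates under `.github/ISSUE_TEMPLATE/` when `make issuetemplates` runs (the old workflow below still shows that call). A minimal sketch of the substitution mechanism, with an illustrative value not taken from this commit:

    # Sketch of printf-style template expansion as used by the _tmpl files.
    # The 'version' value is illustrative; the real expansion is done by a
    # devscript invoked via `make issuetemplates`.
    template = ("- label: I've verified that I'm running yt-dlp version **%(version)s** "
                "([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)")
    fields = {'version': '2023.06.22'}
    print(template % fields)  # %-formatting fills each %(name)s placeholder
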
.github/PULL_REQUEST_TEMPLATE.md (8 changes, vendored)

@@ -30,7 +30,7 @@ ### Before submitting a *pull request* make sure you have:
 - [ ] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
 - [ ] Checked the code with [flake8](https://pypi.python.org/pypi/flake8) and [ran relevant tests](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions)

-### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
+### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check all of the following options that apply:
 - [ ] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
 - [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)

@@ -40,4 +40,10 @@ ### What is the purpose of your *pull request*?
 - [ ] Core bug fix/improvement
 - [ ] New feature (It is strongly [recommended to open an issue first](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#adding-new-feature-or-making-overarching-changes))

+
+<!-- Do NOT edit/remove anything below this! -->
+</details><details><summary>Copilot Summary</summary>
+
+copilot:all
+
 </details>

.github/workflows/build.yml (635 changes, vendored)

@@ -1,393 +1,428 @@
-name: Build
-on: workflow_dispatch
+name: Build Artifacts
+on:
+  workflow_call:
+    inputs:
+      version:
+        required: true
+        type: string
+      channel:
+        required: false
+        default: stable
+        type: string
+      unix:
+        default: true
+        type: boolean
+      linux_arm:
+        default: true
+        type: boolean
+      macos:
+        default: true
+        type: boolean
+      macos_legacy:
+        default: true
+        type: boolean
+      windows:
+        default: true
+        type: boolean
+      windows32:
+        default: true
+        type: boolean
+      meta_files:
+        default: true
+        type: boolean
+    secrets:
+      GPG_SIGNING_KEY:
+        required: false
+
+  workflow_dispatch:
+    inputs:
+      version:
+        description: Version tag (YYYY.MM.DD[.REV])
+        required: true
+        type: string
+      channel:
+        description: Update channel (stable/nightly/...)
+        required: true
+        default: stable
+        type: string
+      unix:
+        description: yt-dlp, yt-dlp.tar.gz, yt-dlp_linux, yt-dlp_linux.zip
+        default: true
+        type: boolean
+      linux_arm:
+        description: yt-dlp_linux_aarch64, yt-dlp_linux_armv7l
+        default: true
+        type: boolean
+      macos:
+        description: yt-dlp_macos, yt-dlp_macos.zip
+        default: true
+        type: boolean
+      macos_legacy:
+        description: yt-dlp_macos_legacy
+        default: true
+        type: boolean
+      windows:
+        description: yt-dlp.exe, yt-dlp_min.exe, yt-dlp_win.zip
+        default: true
+        type: boolean
+      windows32:
+        description: yt-dlp_x86.exe
+        default: true
+        type: boolean
+      meta_files:
+        description: SHA2-256SUMS, SHA2-512SUMS, _update_spec
+        default: true
+        type: boolean

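Aside: the `workflow_dispatch` inputs defined above can also be supplied programmatically. A sketch using GitHub's generic workflow-dispatch REST endpoint (my illustration, not part of this commit; the token, ref, and input values are placeholders, and `gh workflow run` achieves the same):

    # Sketch: triggering the dispatch form of this workflow via GitHub's REST API.
    # GITHUB_TOKEN and 'master' are placeholders supplied by the caller.
    import os
    import requests

    resp = requests.post(
        'https://api.github.com/repos/yt-dlp/yt-dlp/actions/workflows/build.yml/dispatches',
        headers={'Authorization': f'Bearer {os.environ["GITHUB_TOKEN"]}',
                 'Accept': 'application/vnd.github+json'},
        json={'ref': 'master',  # branch to run on (placeholder)
              'inputs': {'version': '2023.06.22', 'channel': 'stable'}},
    )
    resp.raise_for_status()  # the API answers 204 No Content on success
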
 permissions:
   contents: read

 jobs:
-  prepare:
-    permissions:
-      contents: write # for push_release
+  unix:
+    if: inputs.unix
     runs-on: ubuntu-latest
-    outputs:
-      version_suffix: ${{ steps.version_suffix.outputs.version_suffix }}
-      ytdlp_version: ${{ steps.bump_version.outputs.ytdlp_version }}
-      head_sha: ${{ steps.push_release.outputs.head_sha }}
     steps:
       - uses: actions/checkout@v3
-        with:
-          fetch-depth: 0
-      - uses: actions/setup-python@v4
-        with:
-          python-version: '3.10'
-      - name: Set version suffix
-        id: version_suffix
-        env:
-          PUSH_VERSION_COMMIT: ${{ secrets.PUSH_VERSION_COMMIT }}
-        if: "env.PUSH_VERSION_COMMIT == ''"
-        run: echo "version_suffix=$(date -u +"%H%M%S")" >> "$GITHUB_OUTPUT"
-      - name: Bump version
-        id: bump_version
-        run: |
-          python devscripts/update-version.py ${{ steps.version_suffix.outputs.version_suffix }}
-          make issuetemplates
-
-      - name: Push to release
-        id: push_release
-        run: |
-          git config --global user.name github-actions
-          git config --global user.email github-actions@example.com
-          git add -u
-          git commit -m "[version] update" -m "Created by: ${{ github.event.sender.login }}" -m ":ci skip all :ci run dl"
-          git push origin --force ${{ github.event.ref }}:release
-          echo "head_sha=$(git rev-parse HEAD)" >> "$GITHUB_OUTPUT"
-      - name: Update master
-        env:
-          PUSH_VERSION_COMMIT: ${{ secrets.PUSH_VERSION_COMMIT }}
-        if: "env.PUSH_VERSION_COMMIT != ''"
-        run: git push origin ${{ github.event.ref }}
-
-  build_unix:
-    needs: prepare
-    runs-on: ubuntu-latest
-
-    steps:
-      - uses: actions/checkout@v3
-      - uses: actions/setup-python@v4
-        with:
-          python-version: '3.10'
-      - uses: conda-incubator/setup-miniconda@v2
-        with:
+      - uses: actions/setup-python@v4
+        with:
+          python-version: "3.10"
+      - uses: conda-incubator/setup-miniconda@v2
+        with:
           miniforge-variant: Mambaforge
           use-mamba: true
           channels: conda-forge
           auto-update-conda: true
-          activate-environment: ''
+          activate-environment: ""
           auto-activate-base: false
       - name: Install Requirements
         run: |
           sudo apt-get -y install zip pandoc man sed
-          python -m pip install -U pip setuptools wheel twine
+          python -m pip install -U pip setuptools wheel
           python -m pip install -U Pyinstaller -r requirements.txt
           reqs=$(mktemp)
-          echo -e 'python=3.10.*\npyinstaller' >$reqs
-          sed 's/^brotli.*/brotli-python/' <requirements.txt >>$reqs
+          cat > $reqs << EOF
+          python=3.10.*
+          pyinstaller
+          cffi
+          brotli-python
+          EOF
+          sed '/^brotli.*/d' requirements.txt >> $reqs
           mamba create -n build --file $reqs

       - name: Prepare
         run: |
-          python devscripts/update-version.py ${{ needs.prepare.outputs.version_suffix }}
+          python devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
           python devscripts/make_lazy_extractors.py
       - name: Build Unix platform-independent binary
         run: |
           make all tar
       - name: Build Unix standalone binary
         shell: bash -l {0}
         run: |
           unset LD_LIBRARY_PATH # Harmful; set by setup-python
           conda activate build
           python pyinst.py --onedir
           (cd ./dist/yt-dlp_linux && zip -r ../yt-dlp_linux.zip .)
           python pyinst.py
+          mv ./dist/yt-dlp_linux ./yt-dlp_linux
+          mv ./dist/yt-dlp_linux.zip ./yt-dlp_linux.zip

-      - name: Upload artifacts
-        uses: actions/upload-artifact@v3
-        with:
-          path: |
-            yt-dlp
-            yt-dlp.tar.gz
-            dist/yt-dlp_linux
-            dist/yt-dlp_linux.zip
+      - name: Verify --update-to
+        if: vars.UPDATE_TO_VERIFICATION
+        run: |
+          binaries=("yt-dlp" "yt-dlp_linux")
+          for binary in "${binaries[@]}"; do
+            chmod +x ./${binary}
+            cp ./${binary} ./${binary}_downgraded
+            version="$(./${binary} --version)"
+            ./${binary}_downgraded -v --update-to yt-dlp/yt-dlp@2023.03.04
+            downgraded_version="$(./${binary}_downgraded --version)"
+            [[ "$version" != "$downgraded_version" ]]
+          done

-      - name: Build and publish on PyPi
-        env:
-          TWINE_USERNAME: __token__
-          TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
-        if: "env.TWINE_PASSWORD != ''"
-        run: |
-          rm -rf dist/*
-          python devscripts/set-variant.py pip -M "You installed yt-dlp with pip or using the wheel from PyPi; Use that to update"
-          python setup.py sdist bdist_wheel
-          twine upload dist/*
+      - name: Upload artifacts
+        uses: actions/upload-artifact@v3
+        with:
+          path: |
+            yt-dlp
+            yt-dlp.tar.gz
+            yt-dlp_linux
+            yt-dlp_linux.zip

- name: Install SSH private key for Homebrew
|
linux_arm:
|
||||||
env:
|
if: inputs.linux_arm
|
||||||
BREW_TOKEN: ${{ secrets.BREW_TOKEN }}
|
|
||||||
if: "env.BREW_TOKEN != ''"
|
|
||||||
uses: yt-dlp/ssh-agent@v0.5.3
|
|
||||||
with:
|
|
||||||
ssh-private-key: ${{ env.BREW_TOKEN }}
|
|
||||||
- name: Update Homebrew Formulae
|
|
||||||
env:
|
|
||||||
BREW_TOKEN: ${{ secrets.BREW_TOKEN }}
|
|
||||||
if: "env.BREW_TOKEN != ''"
|
|
||||||
run: |
|
|
||||||
git clone git@github.com:yt-dlp/homebrew-taps taps/
|
|
||||||
python devscripts/update-formulae.py taps/Formula/yt-dlp.rb "${{ needs.prepare.outputs.ytdlp_version }}"
|
|
||||||
git -C taps/ config user.name github-actions
|
|
||||||
git -C taps/ config user.email github-actions@example.com
|
|
||||||
git -C taps/ commit -am 'yt-dlp: ${{ needs.prepare.outputs.ytdlp_version }}'
|
|
||||||
git -C taps/ push
|
|
||||||
|
|
||||||
|
|
||||||
build_linux_arm:
|
|
||||||
permissions:
|
permissions:
|
||||||
packages: write # for Creating cache
|
contents: read
|
||||||
|
packages: write # for creating cache
|
||||||
runs-on: ubuntu-latest
|
runs-on: ubuntu-latest
|
||||||
needs: prepare
|
|
||||||
strategy:
|
strategy:
|
||||||
matrix:
|
matrix:
|
||||||
architecture:
|
         architecture:
           - armv7
           - aarch64

     steps:
       - uses: actions/checkout@v3
         with:
           path: ./repo
       - name: Virtualized Install, Prepare & Build
         uses: yt-dlp/run-on-arch-action@v2
         with:
-          githubToken: ${{ github.token }} # To cache image
+          # Ref: https://github.com/uraimo/run-on-arch-action/issues/55
+          env: |
+            GITHUB_WORKFLOW: build
+          githubToken: ${{ github.token }} # To cache image
           arch: ${{ matrix.architecture }}
           distro: ubuntu18.04 # Standalone executable should be built on minimum supported OS
           dockerRunArgs: --volume "${PWD}/repo:/repo"
           install: | # Installing Python 3.10 from the Deadsnakes repo raises errors
             apt update
             apt -y install zlib1g-dev python3.8 python3.8-dev python3.8-distutils python3-pip
             python3.8 -m pip install -U pip setuptools wheel
             # Cannot access requirements.txt from the repo directory at this stage
             python3.8 -m pip install -U Pyinstaller mutagen pycryptodomex websockets brotli certifi
           run: |
             cd repo
             python3.8 -m pip install -U Pyinstaller -r requirements.txt # Cached version may be out of date
-            python3.8 devscripts/update-version.py ${{ needs.prepare.outputs.version_suffix }}
+            python3.8 devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
             python3.8 devscripts/make_lazy_extractors.py
             python3.8 pyinst.py
+            if ${{ vars.UPDATE_TO_VERIFICATION && 'true' || 'false' }}; then
+              arch="${{ (matrix.architecture == 'armv7' && 'armv7l') || matrix.architecture }}"
+              chmod +x ./dist/yt-dlp_linux_${arch}
+              cp ./dist/yt-dlp_linux_${arch} ./dist/yt-dlp_linux_${arch}_downgraded
+              version="$(./dist/yt-dlp_linux_${arch} --version)"
+              ./dist/yt-dlp_linux_${arch}_downgraded -v --update-to yt-dlp/yt-dlp@2023.03.04
+              downgraded_version="$(./dist/yt-dlp_linux_${arch}_downgraded --version)"
+              [[ "$version" != "$downgraded_version" ]]
+            fi

       - name: Upload artifacts
         uses: actions/upload-artifact@v3
         with:
           path: | # run-on-arch-action designates armv7l as armv7
             repo/dist/yt-dlp_linux_${{ (matrix.architecture == 'armv7' && 'armv7l') || matrix.architecture }}
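The inline `UPDATE_TO_VERIFICATION` check added above downgrades the freshly built binary to a pinned older release and then asserts that the reported version changed. For checking a local build by hand, here is a rough Python rendering of that shell logic; the binary path is an assumption taken from the artifact names above, and this is a sketch rather than project tooling:

```python
"""Minimal sketch of the --update-to self-update check done in CI.

Assumes a standalone build exists at BINARY (a hypothetical path based
on the artifact names above); not part of yt-dlp's own devscripts.
"""
import shutil
import subprocess
import sys

BINARY = './dist/yt-dlp_linux_aarch64'  # assumed path
OLD_TAG = 'yt-dlp/yt-dlp@2023.03.04'  # same pinned target the workflow uses


def version_of(path):
    # `yt-dlp --version` prints only the version string on stdout
    return subprocess.run([path, '--version'], check=True,
                          capture_output=True, text=True).stdout.strip()


def main():
    downgraded = BINARY + '_downgraded'
    shutil.copy2(BINARY, downgraded)
    version = version_of(BINARY)
    # Ask the copy to replace itself with the pinned older release
    subprocess.run([downgraded, '-v', '--update-to', OLD_TAG], check=True)
    if version_of(downgraded) == version:
        sys.exit('self-update did not change the reported version')


if __name__ == '__main__':
    main()
```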
-  build_macos:
+  macos:
+    if: inputs.macos
     runs-on: macos-11
-    needs: prepare

     steps:
       - uses: actions/checkout@v3
-      # NB: In order to create a universal2 application, the version of python3 in /usr/bin has to be used
+      # NB: Building universal2 does not work with python from actions/setup-python
       - name: Install Requirements
         run: |
           brew install coreutils
-          /usr/bin/python3 -m pip install -U --user pip Pyinstaller -r requirements.txt
+          python3 -m pip install -U --user pip setuptools wheel
+          # We need to ignore wheels otherwise we break universal2 builds
+          python3 -m pip install -U --user --no-binary :all: Pyinstaller -r requirements.txt

       - name: Prepare
         run: |
-          /usr/bin/python3 devscripts/update-version.py ${{ needs.prepare.outputs.version_suffix }}
-          /usr/bin/python3 devscripts/make_lazy_extractors.py
+          python3 devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
+          python3 devscripts/make_lazy_extractors.py
       - name: Build
         run: |
-          /usr/bin/python3 pyinst.py --target-architecture universal2 --onedir
+          python3 pyinst.py --target-architecture universal2 --onedir
           (cd ./dist/yt-dlp_macos && zip -r ../yt-dlp_macos.zip .)
-          /usr/bin/python3 pyinst.py --target-architecture universal2
+          python3 pyinst.py --target-architecture universal2

+      - name: Verify --update-to
+        if: vars.UPDATE_TO_VERIFICATION
+        run: |
+          chmod +x ./dist/yt-dlp_macos
+          cp ./dist/yt-dlp_macos ./dist/yt-dlp_macos_downgraded
+          version="$(./dist/yt-dlp_macos --version)"
+          ./dist/yt-dlp_macos_downgraded -v --update-to yt-dlp/yt-dlp@2023.03.04
+          downgraded_version="$(./dist/yt-dlp_macos_downgraded --version)"
+          [[ "$version" != "$downgraded_version" ]]
+
       - name: Upload artifacts
         uses: actions/upload-artifact@v3
         with:
           path: |
             dist/yt-dlp_macos
             dist/yt-dlp_macos.zip
-  build_macos_legacy:
+  macos_legacy:
+    if: inputs.macos_legacy
     runs-on: macos-latest
-    needs: prepare

     steps:
       - uses: actions/checkout@v3
       - name: Install Python
         # We need the official Python, because the GA ones only support newer macOS versions
         env:
           PYTHON_VERSION: 3.10.5
           MACOSX_DEPLOYMENT_TARGET: 10.9 # Used up by the Python build tools
         run: |
           # Hack to get the latest patch version. Uncomment if needed
           #brew install python@3.10
           #export PYTHON_VERSION=$( $(brew --prefix)/opt/python@3.10/bin/python3 --version | cut -d ' ' -f 2 )
           curl https://www.python.org/ftp/python/${PYTHON_VERSION}/python-${PYTHON_VERSION}-macos11.pkg -o "python.pkg"
           sudo installer -pkg python.pkg -target /
           python3 --version
       - name: Install Requirements
         run: |
           brew install coreutils
-          python3 -m pip install -U --user pip Pyinstaller -r requirements.txt
+          python3 -m pip install -U --user pip setuptools wheel
+          python3 -m pip install -U --user Pyinstaller -r requirements.txt

       - name: Prepare
         run: |
-          python3 devscripts/update-version.py ${{ needs.prepare.outputs.version_suffix }}
+          python3 devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
           python3 devscripts/make_lazy_extractors.py
       - name: Build
         run: |
           python3 pyinst.py
           mv dist/yt-dlp_macos dist/yt-dlp_macos_legacy

+      - name: Verify --update-to
+        if: vars.UPDATE_TO_VERIFICATION
+        run: |
+          chmod +x ./dist/yt-dlp_macos_legacy
+          cp ./dist/yt-dlp_macos_legacy ./dist/yt-dlp_macos_legacy_downgraded
+          version="$(./dist/yt-dlp_macos_legacy --version)"
+          ./dist/yt-dlp_macos_legacy_downgraded -v --update-to yt-dlp/yt-dlp@2023.03.04
+          downgraded_version="$(./dist/yt-dlp_macos_legacy_downgraded --version)"
+          [[ "$version" != "$downgraded_version" ]]
+
       - name: Upload artifacts
         uses: actions/upload-artifact@v3
         with:
           path: |
             dist/yt-dlp_macos_legacy
-  build_windows:
+  windows:
+    if: inputs.windows
     runs-on: windows-latest
-    needs: prepare

     steps:
       - uses: actions/checkout@v3
       - uses: actions/setup-python@v4
         with: # 3.8 is used for Win7 support
-          python-version: '3.8'
+          python-version: "3.8"
       - name: Install Requirements
         run: | # Custom pyinstaller built with https://github.com/yt-dlp/pyinstaller-builds
           python -m pip install -U pip setuptools wheel py2exe
-          pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/x86_64/pyinstaller-5.3-py3-none-any.whl" -r requirements.txt
+          pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/x86_64/pyinstaller-5.8.0-py3-none-any.whl" -r requirements.txt

       - name: Prepare
         run: |
-          python devscripts/update-version.py ${{ needs.prepare.outputs.version_suffix }}
+          python devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
           python devscripts/make_lazy_extractors.py
       - name: Build
         run: |
           python setup.py py2exe
           Move-Item ./dist/yt-dlp.exe ./dist/yt-dlp_min.exe
           python pyinst.py
           python pyinst.py --onedir
           Compress-Archive -Path ./dist/yt-dlp/* -DestinationPath ./dist/yt-dlp_win.zip

+      - name: Verify --update-to
+        if: vars.UPDATE_TO_VERIFICATION
+        run: |
+          foreach ($name in @("yt-dlp","yt-dlp_min")) {
+            Copy-Item "./dist/${name}.exe" "./dist/${name}_downgraded.exe"
+            $version = & "./dist/${name}.exe" --version
+            & "./dist/${name}_downgraded.exe" -v --update-to yt-dlp/yt-dlp@2023.03.04
+            $downgraded_version = & "./dist/${name}_downgraded.exe" --version
+            if ($version -eq $downgraded_version) {
+              exit 1
+            }
+          }
+
       - name: Upload artifacts
         uses: actions/upload-artifact@v3
         with:
           path: |
             dist/yt-dlp.exe
             dist/yt-dlp_min.exe
             dist/yt-dlp_win.zip
-  build_windows32:
+  windows32:
+    if: inputs.windows32
     runs-on: windows-latest
-    needs: prepare

     steps:
       - uses: actions/checkout@v3
       - uses: actions/setup-python@v4
         with: # 3.7 is used for Vista support. See https://github.com/yt-dlp/yt-dlp/issues/390
-          python-version: '3.7'
-          architecture: 'x86'
+          python-version: "3.7"
+          architecture: "x86"
       - name: Install Requirements
         run: |
           python -m pip install -U pip setuptools wheel
-          pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/i686/pyinstaller-5.3-py3-none-any.whl" -r requirements.txt
+          pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/i686/pyinstaller-5.8.0-py3-none-any.whl" -r requirements.txt

       - name: Prepare
         run: |
-          python devscripts/update-version.py ${{ needs.prepare.outputs.version_suffix }}
+          python devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
           python devscripts/make_lazy_extractors.py
       - name: Build
         run: |
           python pyinst.py

+      - name: Verify --update-to
+        if: vars.UPDATE_TO_VERIFICATION
+        run: |
+          foreach ($name in @("yt-dlp_x86")) {
+            Copy-Item "./dist/${name}.exe" "./dist/${name}_downgraded.exe"
+            $version = & "./dist/${name}.exe" --version
+            & "./dist/${name}_downgraded.exe" -v --update-to yt-dlp/yt-dlp@2023.03.04
+            $downgraded_version = & "./dist/${name}_downgraded.exe" --version
+            if ($version -eq $downgraded_version) {
+              exit 1
+            }
+          }
+
       - name: Upload artifacts
         uses: actions/upload-artifact@v3
         with:
           path: |
             dist/yt-dlp_x86.exe
-  publish_release:
-    permissions:
-      contents: write # for action-gh-release
-    runs-on: ubuntu-latest
-    needs: [prepare, build_unix, build_linux_arm, build_windows, build_windows32, build_macos, build_macos_legacy]
-
-    steps:
-      - uses: actions/checkout@v3
-      - uses: actions/download-artifact@v3
-
-      - name: Get Changelog
-        run: |
-          changelog=$(grep -oPz '(?s)(?<=### ${{ needs.prepare.outputs.ytdlp_version }}\n{2}).+?(?=\n{2,3}###)' Changelog.md) || true
-          echo "changelog<<EOF" >> $GITHUB_ENV
-          echo "$changelog" >> $GITHUB_ENV
-          echo "EOF" >> $GITHUB_ENV
-      - name: Make Update spec
-        run: |
-          echo "# This file is used for regulating self-update" >> _update_spec
-          echo "lock 2022.07.18 .+ Python 3.6" >> _update_spec
-      - name: Make SHA2-SUMS files
-        run: |
-          sha256sum artifact/yt-dlp | awk '{print $1 " yt-dlp"}' >> SHA2-256SUMS
-          sha256sum artifact/yt-dlp.tar.gz | awk '{print $1 " yt-dlp.tar.gz"}' >> SHA2-256SUMS
-          sha256sum artifact/yt-dlp.exe | awk '{print $1 " yt-dlp.exe"}' >> SHA2-256SUMS
-          sha256sum artifact/yt-dlp_win.zip | awk '{print $1 " yt-dlp_win.zip"}' >> SHA2-256SUMS
-          sha256sum artifact/yt-dlp_min.exe | awk '{print $1 " yt-dlp_min.exe"}' >> SHA2-256SUMS
-          sha256sum artifact/yt-dlp_x86.exe | awk '{print $1 " yt-dlp_x86.exe"}' >> SHA2-256SUMS
-          sha256sum artifact/yt-dlp_macos | awk '{print $1 " yt-dlp_macos"}' >> SHA2-256SUMS
-          sha256sum artifact/yt-dlp_macos.zip | awk '{print $1 " yt-dlp_macos.zip"}' >> SHA2-256SUMS
-          sha256sum artifact/yt-dlp_macos_legacy | awk '{print $1 " yt-dlp_macos_legacy"}' >> SHA2-256SUMS
-          sha256sum artifact/yt-dlp_linux_armv7l | awk '{print $1 " yt-dlp_linux_armv7l"}' >> SHA2-256SUMS
-          sha256sum artifact/yt-dlp_linux_aarch64 | awk '{print $1 " yt-dlp_linux_aarch64"}' >> SHA2-256SUMS
-          sha256sum artifact/dist/yt-dlp_linux | awk '{print $1 " yt-dlp_linux"}' >> SHA2-256SUMS
-          sha256sum artifact/dist/yt-dlp_linux.zip | awk '{print $1 " yt-dlp_linux.zip"}' >> SHA2-256SUMS
-          sha512sum artifact/yt-dlp | awk '{print $1 " yt-dlp"}' >> SHA2-512SUMS
-          sha512sum artifact/yt-dlp.tar.gz | awk '{print $1 " yt-dlp.tar.gz"}' >> SHA2-512SUMS
-          sha512sum artifact/yt-dlp.exe | awk '{print $1 " yt-dlp.exe"}' >> SHA2-512SUMS
-          sha512sum artifact/yt-dlp_win.zip | awk '{print $1 " yt-dlp_win.zip"}' >> SHA2-512SUMS
-          sha512sum artifact/yt-dlp_min.exe | awk '{print $1 " yt-dlp_min.exe"}' >> SHA2-512SUMS
-          sha512sum artifact/yt-dlp_x86.exe | awk '{print $1 " yt-dlp_x86.exe"}' >> SHA2-512SUMS
-          sha512sum artifact/yt-dlp_macos | awk '{print $1 " yt-dlp_macos"}' >> SHA2-512SUMS
-          sha512sum artifact/yt-dlp_macos.zip | awk '{print $1 " yt-dlp_macos.zip"}' >> SHA2-512SUMS
-          sha512sum artifact/yt-dlp_macos_legacy | awk '{print $1 " yt-dlp_macos_legacy"}' >> SHA2-512SUMS
-          sha512sum artifact/yt-dlp_linux_armv7l | awk '{print $1 " yt-dlp_linux_armv7l"}' >> SHA2-512SUMS
-          sha512sum artifact/yt-dlp_linux_aarch64 | awk '{print $1 " yt-dlp_linux_aarch64"}' >> SHA2-512SUMS
-          sha512sum artifact/dist/yt-dlp_linux | awk '{print $1 " yt-dlp_linux"}' >> SHA2-512SUMS
-          sha512sum artifact/dist/yt-dlp_linux.zip | awk '{print $1 " yt-dlp_linux.zip"}' >> SHA2-512SUMS
-
-      - name: Publish Release
-        uses: yt-dlp/action-gh-release@v1
-        with:
-          tag_name: ${{ needs.prepare.outputs.ytdlp_version }}
-          name: yt-dlp ${{ needs.prepare.outputs.ytdlp_version }}
-          target_commitish: ${{ needs.prepare.outputs.head_sha }}
-          body: |
-            #### [A description of the various files]((https://github.com/yt-dlp/yt-dlp#release-files)) are in the README
-
-            ---
-            <details open><summary><h3>Changelog</summary>
-            <p>
-
-            ${{ env.changelog }}
-
-            </p>
-            </details>
-          files: |
-            SHA2-256SUMS
-            SHA2-512SUMS
-            artifact/yt-dlp
-            artifact/yt-dlp.tar.gz
-            artifact/yt-dlp.exe
-            artifact/yt-dlp_win.zip
-            artifact/yt-dlp_min.exe
-            artifact/yt-dlp_x86.exe
-            artifact/yt-dlp_macos
-            artifact/yt-dlp_macos.zip
-            artifact/yt-dlp_macos_legacy
-            artifact/yt-dlp_linux_armv7l
-            artifact/yt-dlp_linux_aarch64
-            artifact/dist/yt-dlp_linux
-            artifact/dist/yt-dlp_linux.zip
-            _update_spec
+  meta_files:
+    if: inputs.meta_files && always() && !cancelled()
+    needs:
+      - unix
+      - linux_arm
+      - macos
+      - macos_legacy
+      - windows
+      - windows32
+    runs-on: ubuntu-latest
+
+    steps:
+      - uses: actions/download-artifact@v3
+
+      - name: Make SHA2-SUMS files
+        run: |
+          cd ./artifact/
+          sha256sum * > ../SHA2-256SUMS
+          sha512sum * > ../SHA2-512SUMS
+
+      - name: Make Update spec
+        run: |
+          cat >> _update_spec << EOF
+          # This file is used for regulating self-update
+          lock 2022.08.18.36 .+ Python 3.6
+          EOF
+
+      - name: Sign checksum files
+        env:
+          GPG_SIGNING_KEY: ${{ secrets.GPG_SIGNING_KEY }}
+        if: env.GPG_SIGNING_KEY != ''
+        run: |
+          gpg --batch --import <<< "${{ secrets.GPG_SIGNING_KEY }}"
+          for signfile in ./SHA*SUMS; do
+            gpg --batch --detach-sign "$signfile"
+          done
+
+      - name: Upload artifacts
+        uses: actions/upload-artifact@v3
+        with:
+          path: |
+            SHA*SUMS*
+            _update_spec
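The `_update_spec` written by the new `Make Update spec` step regulates self-update: judging from the file's own comment, a `lock <version> <regex>` line pins installations whose variant matches the regex to at most that version. A hedged sketch of how such a line could be interpreted follows; the parsing and the variant strings below are assumptions drawn only from the two lines the step writes, and yt_dlp/update.py holds the real logic:

```python
"""Illustrative parser for the `lock` lines in _update_spec above.
The semantics assumed here are inferred, not yt-dlp's actual updater."""
import re


def _as_key(version):
    # Compare dotted date-versions component-wise, e.g. 2023.03.04
    return [int(part) for part in version.split('.')]


def max_allowed_version(spec_text, variant, requested):
    """Return the highest version the given variant may update to."""
    for line in spec_text.splitlines():
        if not line or line.startswith('#'):
            continue
        action, locked, pattern = line.split(None, 2)
        if action == 'lock' and re.fullmatch(pattern, variant):
            if _as_key(requested) > _as_key(locked):
                return locked
    return requested


SPEC = '# This file is used for regulating self-update\nlock 2022.08.18.36 .+ Python 3.6\n'
# Hypothetical variant strings, purely for the demonstration:
print(max_allowed_version(SPEC, 'linux_exe Python 3.6', '2023.06.22'))   # 2022.08.18.36
print(max_allowed_version(SPEC, 'linux_exe Python 3.11', '2023.06.22'))  # 2023.06.22
```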

97  .github/workflows/publish.yml  (vendored, new file)
@@ -0,0 +1,97 @@
+name: Publish
+on:
+  workflow_call:
+    inputs:
+      channel:
+        default: stable
+        required: true
+        type: string
+      version:
+        required: true
+        type: string
+      target_commitish:
+        required: true
+        type: string
+      prerelease:
+        default: false
+        required: true
+        type: boolean
+    secrets:
+      ARCHIVE_REPO_TOKEN:
+        required: false
+
+permissions:
+  contents: write
+
+jobs:
+  publish:
+    runs-on: ubuntu-latest
+
+    steps:
+      - uses: actions/checkout@v3
+        with:
+          fetch-depth: 0
+      - uses: actions/download-artifact@v3
+      - uses: actions/setup-python@v4
+        with:
+          python-version: "3.10"
+
+      - name: Generate release notes
+        run: |
+          printf '%s' \
+            '[![Installation](https://img.shields.io/badge/-Which%20file%20should%20I%20download%3F-white.svg?style=for-the-badge)]' \
+            '(https://github.com/yt-dlp/yt-dlp#installation "Installation instructions") ' \
+            '[![Documentation](https://img.shields.io/badge/-Docs-brightgreen.svg?style=for-the-badge&logo=GitBook&labelColor=555555)]' \
+            '(https://github.com/yt-dlp/yt-dlp/tree/2023.03.04#readme "Documentation") ' \
+            '[![Donate](https://img.shields.io/badge/_-Donate-red.svg?logo=githubsponsors&labelColor=555555&style=for-the-badge)]' \
+            '(https://github.com/yt-dlp/yt-dlp/blob/master/Collaborators.md#collaborators "Donate") ' \
+            '[![Discord](https://img.shields.io/discord/807245652072857610?color=blue&labelColor=555555&label=&logo=discord&style=for-the-badge)]' \
+            '(https://discord.gg/H5MNcFW63r "Discord") ' \
+            ${{ inputs.channel != 'nightly' && '"[![Nightly](https://img.shields.io/badge/Get%20nightly%20builds-purple.svg?style=for-the-badge)]" \
+            "(https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/latest \"Nightly builds\")"' || '' }} \
+            > ./RELEASE_NOTES
+          printf '\n\n' >> ./RELEASE_NOTES
+          cat >> ./RELEASE_NOTES << EOF
+          #### A description of the various files are in the [README](https://github.com/yt-dlp/yt-dlp#release-files)
+          ---
+          $(python ./devscripts/make_changelog.py -vv --collapsible)
+          EOF
+          printf '%s\n\n' '**This is an automated nightly pre-release build**' >> ./NIGHTLY_NOTES
+          cat ./RELEASE_NOTES >> ./NIGHTLY_NOTES
+          printf '%s\n\n' 'Generated from: https://github.com/${{ github.repository }}/commit/${{ inputs.target_commitish }}' >> ./ARCHIVE_NOTES
+          cat ./RELEASE_NOTES >> ./ARCHIVE_NOTES
+
+      - name: Archive nightly release
+        env:
+          GH_TOKEN: ${{ secrets.ARCHIVE_REPO_TOKEN }}
+          GH_REPO: ${{ vars.ARCHIVE_REPO }}
+        if: |
+          inputs.channel == 'nightly' && env.GH_TOKEN != '' && env.GH_REPO != ''
+        run: |
+          gh release create \
+            --notes-file ARCHIVE_NOTES \
+            --title "yt-dlp nightly ${{ inputs.version }}" \
+            ${{ inputs.version }} \
+            artifact/*
+
+      - name: Prune old nightly release
+        if: inputs.channel == 'nightly' && !vars.ARCHIVE_REPO
+        env:
+          GH_TOKEN: ${{ github.token }}
+        run: |
+          gh release delete --yes --cleanup-tag "nightly" || true
+          git tag --delete "nightly" || true
+          sleep 5 # Enough time to cover deletion race condition
+
+      - name: Publish release${{ inputs.channel == 'nightly' && ' (nightly)' || '' }}
+        env:
+          GH_TOKEN: ${{ github.token }}
+        if: (inputs.channel == 'nightly' && !vars.ARCHIVE_REPO) || inputs.channel != 'nightly'
+        run: |
+          gh release create \
+            --notes-file ${{ inputs.channel == 'nightly' && 'NIGHTLY_NOTES' || 'RELEASE_NOTES' }} \
+            --target ${{ inputs.target_commitish }} \
+            --title "yt-dlp ${{ inputs.channel == 'nightly' && 'nightly ' || '' }}${{ inputs.version }}" \
+            ${{ inputs.prerelease && '--prerelease' || '' }} \
+            ${{ inputs.channel == 'nightly' && '"nightly"' || inputs.version }} \
+            artifact/*
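Since the `artifact/*` glob above publishes the `SHA2-256SUMS`/`SHA2-512SUMS` files (and, when the signing step ran, their detached `.sig` signatures) next to the binaries, a download can be verified offline. Below is a small, hypothetical verification helper: the file names come from the workflows, nothing here is a yt-dlp API, and the signature itself can be checked separately with `gpg --verify SHA2-256SUMS.sig SHA2-256SUMS`:

```python
"""Check a downloaded release asset against SHA2-256SUMS.

Standalone sketch; assumes the sums file uses the usual
`<hex digest>  <filename>` layout that sha256sum produces.
"""
import hashlib
import sys


def expected_digest(sums_path, filename):
    with open(sums_path, encoding='utf-8') as f:
        for line in f:
            digest, _, name = line.strip().partition(' ')
            if name.strip() == filename:
                return digest
    raise KeyError(f'{filename} not listed in {sums_path}')


def actual_digest(path):
    sha = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(1 << 20), b''):
            sha.update(chunk)
    return sha.hexdigest()


if __name__ == '__main__':
    target = sys.argv[1] if len(sys.argv) > 1 else 'yt-dlp'
    ok = expected_digest('SHA2-256SUMS', target) == actual_digest(target)
    print('OK' if ok else 'MISMATCH')
    sys.exit(0 if ok else 1)
```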

52  .github/workflows/release-nightly.yml  (vendored, new file)
@@ -0,0 +1,52 @@
+name: Release (nightly)
+on:
+  push:
+    branches:
+      - master
+    paths:
+      - "yt_dlp/**.py"
+      - "!yt_dlp/version.py"
+concurrency:
+  group: release-nightly
+  cancel-in-progress: true
+permissions:
+  contents: read
+
+jobs:
+  prepare:
+    if: vars.BUILD_NIGHTLY != ''
+    runs-on: ubuntu-latest
+    outputs:
+      version: ${{ steps.get_version.outputs.version }}
+
+    steps:
+      - uses: actions/checkout@v3
+      - name: Get version
+        id: get_version
+        run: |
+          python devscripts/update-version.py "$(date -u +"%H%M%S")" | grep -Po "version=\d+(\.\d+){3}" >> "$GITHUB_OUTPUT"
+
+  build:
+    needs: prepare
+    uses: ./.github/workflows/build.yml
+    with:
+      version: ${{ needs.prepare.outputs.version }}
+      channel: nightly
+    permissions:
+      contents: read
+      packages: write # For package cache
+    secrets:
+      GPG_SIGNING_KEY: ${{ secrets.GPG_SIGNING_KEY }}
+
+  publish:
+    needs: [prepare, build]
+    uses: ./.github/workflows/publish.yml
+    secrets:
+      ARCHIVE_REPO_TOKEN: ${{ secrets.ARCHIVE_REPO_TOKEN }}
+    permissions:
+      contents: write
+    with:
+      channel: nightly
+      prerelease: true
+      version: ${{ needs.prepare.outputs.version }}
+      target_commitish: ${{ github.sha }}
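The `Get version` step above derives the nightly version by feeding the current UTC time to `devscripts/update-version.py` as a revision and grepping `version=...` from its output. The following illustrates the four-component scheme that the `version=\d+(\.\d+){3}` pattern implies; this mimics, but is not, the devscript, and whether leading zeros survive in the revision is an assumption:

```python
"""Sketch of the nightly version scheme implied by the step above:
YYYY.MM.DD plus an HHMMSS revision."""
from datetime import datetime, timezone

now = datetime.now(timezone.utc)
# Assumption: the revision is the raw `date -u +%H%M%S` string
version = f'{now.year}.{now.month:02}.{now.day:02}.{now:%H%M%S}'
print(f'version={version}')  # e.g. version=2023.06.22.185542
```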

163  .github/workflows/release.yml  (vendored, new file)
@@ -0,0 +1,163 @@
+name: Release
+on:
+  workflow_dispatch:
+    inputs:
+      version:
+        description: Version tag (YYYY.MM.DD[.REV])
+        required: false
+        default: ''
+        type: string
+      channel:
+        description: Update channel (stable/nightly/...)
+        required: false
+        default: ''
+        type: string
+      prerelease:
+        description: Pre-release
+        default: false
+        type: boolean
+
+permissions:
+  contents: read
+
+jobs:
+  prepare:
+    permissions:
+      contents: write
+    runs-on: ubuntu-latest
+    outputs:
+      channel: ${{ steps.set_channel.outputs.channel }}
+      version: ${{ steps.update_version.outputs.version }}
+      head_sha: ${{ steps.get_target.outputs.head_sha }}
+
+    steps:
+      - uses: actions/checkout@v3
+        with:
+          fetch-depth: 0
+
+      - uses: actions/setup-python@v4
+        with:
+          python-version: "3.10"
+
+      - name: Set channel
+        id: set_channel
+        run: |
+          CHANNEL="${{ github.repository == 'yt-dlp/yt-dlp' && 'stable' || github.repository }}"
+          echo "channel=${{ inputs.channel || '$CHANNEL' }}" > "$GITHUB_OUTPUT"
+
+      - name: Update version
+        id: update_version
+        run: |
+          REVISION="${{ vars.PUSH_VERSION_COMMIT == '' && '$(date -u +"%H%M%S")' || '' }}"
+          REVISION="${{ inputs.prerelease && '$(date -u +"%H%M%S")' || '$REVISION' }}"
+          python devscripts/update-version.py ${{ inputs.version || '$REVISION' }} | \
+            grep -Po "version=\d+\.\d+\.\d+(\.\d+)?" >> "$GITHUB_OUTPUT"
+
+      - name: Update documentation
+        run: |
+          make doc
+          sed '/### /Q' Changelog.md >> ./CHANGELOG
+          echo '### ${{ steps.update_version.outputs.version }}' >> ./CHANGELOG
+          python ./devscripts/make_changelog.py -vv -c >> ./CHANGELOG
+          echo >> ./CHANGELOG
+          grep -Poz '(?s)### \d+\.\d+\.\d+.+' 'Changelog.md' | head -n -1 >> ./CHANGELOG
+          cat ./CHANGELOG > Changelog.md
+
+      - name: Push to release
+        id: push_release
+        if: ${{ !inputs.prerelease }}
+        run: |
+          git config --global user.name github-actions
+          git config --global user.email github-actions@example.com
+          git add -u
+          git commit -m "Release ${{ steps.update_version.outputs.version }}" \
+            -m "Created by: ${{ github.event.sender.login }}" -m ":ci skip all :ci run dl"
+          git push origin --force ${{ github.event.ref }}:release
+
+      - name: Get target commitish
+        id: get_target
+        run: |
+          echo "head_sha=$(git rev-parse HEAD)" >> "$GITHUB_OUTPUT"
+
+      - name: Update master
+        if: vars.PUSH_VERSION_COMMIT != '' && !inputs.prerelease
+        run: git push origin ${{ github.event.ref }}
+
+  build:
+    needs: prepare
+    uses: ./.github/workflows/build.yml
+    with:
+      version: ${{ needs.prepare.outputs.version }}
+      channel: ${{ needs.prepare.outputs.channel }}
+    permissions:
+      contents: read
+      packages: write # For package cache
+    secrets:
+      GPG_SIGNING_KEY: ${{ secrets.GPG_SIGNING_KEY }}
+
+  publish_pypi_homebrew:
+    needs: [prepare, build]
+    runs-on: ubuntu-latest
+
+    steps:
+      - uses: actions/checkout@v3
+      - uses: actions/setup-python@v4
+        with:
+          python-version: "3.10"
+
+      - name: Install Requirements
+        run: |
+          sudo apt-get -y install pandoc man
+          python -m pip install -U pip setuptools wheel twine
+          python -m pip install -U -r requirements.txt
+
+      - name: Prepare
+        run: |
+          python devscripts/update-version.py ${{ needs.prepare.outputs.version }}
+          python devscripts/make_lazy_extractors.py
+
+      - name: Build and publish on PyPI
+        env:
+          TWINE_USERNAME: __token__
+          TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
+        if: env.TWINE_PASSWORD != '' && !inputs.prerelease
+        run: |
+          rm -rf dist/*
+          make pypi-files
+          python devscripts/set-variant.py pip -M "You installed yt-dlp with pip or using the wheel from PyPi; Use that to update"
+          python setup.py sdist bdist_wheel
+          twine upload dist/*
+
+      - name: Checkout Homebrew repository
+        env:
+          BREW_TOKEN: ${{ secrets.BREW_TOKEN }}
+          PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }}
+        if: env.BREW_TOKEN != '' && env.PYPI_TOKEN != '' && !inputs.prerelease
+        uses: actions/checkout@v3
+        with:
+          repository: yt-dlp/homebrew-taps
+          path: taps
+          ssh-key: ${{ secrets.BREW_TOKEN }}
+
+      - name: Update Homebrew Formulae
+        env:
+          BREW_TOKEN: ${{ secrets.BREW_TOKEN }}
+          PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }}
+        if: env.BREW_TOKEN != '' && env.PYPI_TOKEN != '' && !inputs.prerelease
+        run: |
+          python devscripts/update-formulae.py taps/Formula/yt-dlp.rb "${{ needs.prepare.outputs.version }}"
+          git -C taps/ config user.name github-actions
+          git -C taps/ config user.email github-actions@example.com
+          git -C taps/ commit -am 'yt-dlp: ${{ needs.prepare.outputs.version }}'
+          git -C taps/ push
+
+  publish:
+    needs: [prepare, build]
+    uses: ./.github/workflows/publish.yml
+    permissions:
+      contents: write
+    with:
+      channel: ${{ needs.prepare.outputs.channel }}
+      prerelease: ${{ inputs.prerelease }}
+      version: ${{ needs.prepare.outputs.version }}
+      target_commitish: ${{ needs.prepare.outputs.head_sha }}
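The `Update documentation` step above splices generated notes into `Changelog.md`: `sed '/### /Q'` keeps everything before the first `### ` heading, the new `### <version>` section is written, and the remaining old sections are appended back. A rough Python rendering of that pipeline (the function and sample strings are illustrative only, not project code):

```python
"""Illustrative equivalent of the changelog splice in `Update documentation`."""


def splice_changelog(old_text, version, new_section_body):
    lines = old_text.splitlines(keepends=True)
    # Like `sed '/### /Q'`: keep everything before the first "### " heading
    first = next((i for i, line in enumerate(lines) if line.startswith('### ')),
                 len(lines))
    head, tail = ''.join(lines[:first]), ''.join(lines[first:])
    return f'{head}### {version}\n{new_section_body}\n{tail}'


OLD = '# Changelog\n\n<!-- release instructions -->\n\n### 2023.06.21\n- older entry\n'
print(splice_changelog(OLD, '2023.06.22', '- new entry\n'))
```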

CONTRIBUTING.md
@@ -79,7 +79,7 @@ ### Are you using the latest version?

 ### Is the issue already documented?

-Make sure that someone has not already opened the issue you're trying to open. Search at the top of the window or browse the [GitHub Issues](https://github.com/yt-dlp/yt-dlp/search?type=Issues) of this repository. If there is an issue, feel free to write something along the lines of "This affects me as well, with version 2021.01.01. Here is some more information on the issue: ...". While some issues may be old, a new post into them often spurs rapid activity.
+Make sure that someone has not already opened the issue you're trying to open. Search at the top of the window or browse the [GitHub Issues](https://github.com/yt-dlp/yt-dlp/search?type=Issues) of this repository. If there is an issue, subscribe to it to be notified when there is any progress. Unless you have something useful to add to the conversation, please refrain from commenting.

 Additionally, it is also helpful to see if the issue has already been documented in the [youtube-dl issue tracker](https://github.com/ytdl-org/youtube-dl/issues). If similar issues have already been reported in youtube-dl (but not in our issue tracker), links to them can be included in your issue report here.

@@ -127,7 +127,7 @@ ### Are you willing to share account details if needed?

 ### Is the website primarily used for piracy?

-We follow [youtube-dl's policy](https://github.com/ytdl-org/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free) to not support services that are primarily used for infringing copyright. Additionally, it has been decided not to support porn sites that specialize in deep fake. We also cannot support any service that serves only [DRM protected content](https://en.wikipedia.org/wiki/Digital_rights_management).
+We follow [youtube-dl's policy](https://github.com/ytdl-org/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free) to not support services that are primarily used for infringing copyright. Additionally, it has been decided not to support porn sites that specialize in fakes. We also cannot support any service that serves only [DRM protected content](https://en.wikipedia.org/wiki/Digital_rights_management).

@@ -246,7 +246,7 @@ ## yt-dlp coding conventions

 This section introduces guidelines for writing idiomatic, robust and future-proof extractor code.

-Extractors are very fragile by nature since they depend on the layout of the source data provided by 3rd party media hosters out of your control and this layout tends to change. As an extractor implementer your task is not only to write code that will extract media links and metadata correctly but also to minimize dependency on the source's layout and even to make the code foresee potential future changes and be ready for that. This is important because it will allow the extractor not to break on minor layout changes thus keeping old yt-dlp versions working. Even though this breakage issue may be easily fixed by a new version of yt-dlp, this could take some time, during which the the extractor will remain broken.
+Extractors are very fragile by nature since they depend on the layout of the source data provided by 3rd party media hosters out of your control and this layout tends to change. As an extractor implementer your task is not only to write code that will extract media links and metadata correctly but also to minimize dependency on the source's layout and even to make the code foresee potential future changes and be ready for that. This is important because it will allow the extractor not to break on minor layout changes thus keeping old yt-dlp versions working. Even though this breakage issue may be easily fixed by a new version of yt-dlp, this could take some time, during which the extractor will remain broken.

 ### Mandatory and optional metafields
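The coding-conventions paragraph in the hunk above is the rationale behind defensive helpers such as `traverse_obj` from `yt_dlp.utils`: metadata lookups should tolerate layout drift so that a minor site change degrades the output instead of crashing the extractor. A minimal, self-contained illustration (the `data` dict is an invented stand-in for a site's JSON layout):

```python
"""Defensive metadata extraction in the spirit of the paragraph above."""
from yt_dlp.utils import traverse_obj

data = {'video': {'title': 'Example', 'stats': {'views': '1234'}}}

# Each lookup tolerates missing keys instead of raising, and alternative
# paths can be tried in order, so partial metadata survives layout changes
info = {
    'title': traverse_obj(data, ('video', 'title')),
    'view_count': traverse_obj(data, ('video', 'stats', 'views'), ('video', 'views')),
    'uploader': traverse_obj(data, ('video', 'owner', 'name')),  # absent -> None
}
print(info)
```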

81  CONTRIBUTORS
@@ -4,6 +4,7 @@ coletdjnz/colethedj (collaborator)
 Ashish0804 (collaborator)
 nao20010128nao/Lesmiscore (collaborator)
 bashonly (collaborator)
+Grub4K (collaborator)
 h-h-h-h
 pauldubois98
 nixxo
@@ -319,7 +320,6 @@ columndeeply
 DoubleCouponDay
 Fabi019
 GautamMKGarg
-Grub4K
 itachi-19
 jeroenj
 josanabr
@@ -381,3 +381,82 @@ gschizas
 JC-Chung
 mzhou
 OndrejBakan
+ab4cbef
+aionescu
+amra
+ByteDream
+carusocr
+chexxor
+felixonmars
+FrankZ85
+FriedrichRehren
+gregsadetsky
+LeoniePhiline
+LowSuggestion912
+Matumo
+OIRNOIR
+OMEGARAZER
+oxamun
+pmitchell86
+qbnu
+qulaz
+rebane2001
+road-master
+rohieb
+sdht0
+seproDev
+Hill-98
+LXYan2333
+mushbite
+venkata-krishnas
+7vlad7
+alexklapheke
+arobase-che
+bepvte
+bergoid
+blmarket
+brandon-dacrib
+c-basalt
+CoryTibbettsDev
+Cyberes
+D0LLYNH0
+danog
+DataGhost
+falbrechtskirchinger
+foreignBlade
+garret1317
+hasezoey
+hoaluvn
+ItzMaxTV
+ivanskodje
+jo-nike
+kangalio
+linsui
+makew0rld
+menschel
+mikf
+mrscrapy
+NDagestad
+Neurognostic
+NextFire
+nick-cd
+permunkle
+pzhlkj6612
+ringus1
+rjy
+Schmoaaaaah
+sjthespian
+theperfectpunk
+toomyzoom
+truedread
+TxI5
+unbeatable-101
+vampirefrog
+vidiot720
+viktor-enzell
+zhgwn
+barthelmannk
+berkanteber
+OverlordQ
+rexlambert22
+Ti4eeT4e

512  Changelog.md
@@ -1,19 +1,511 @@
# Changelog
|
# Changelog
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
# Instuctions for creating release
|
# To create a release, dispatch the https://github.com/yt-dlp/yt-dlp/actions/workflows/release.yml workflow on master
|
||||||
|
|
||||||
* Run `make doc`
|
|
||||||
* Update Changelog.md and CONTRIBUTORS
|
|
||||||
* Change "Based on ytdl" version in Readme.md if needed
|
|
||||||
* Commit as `Release <version>` and push to master
|
|
||||||
* Dispatch the workflow https://github.com/yt-dlp/yt-dlp/actions/workflows/build.yml on master
|
|
||||||
-->
|
-->
|
||||||
|
|
||||||
|
### 2023.06.22
|
||||||
|
|
||||||
|
#### Core changes
|
||||||
|
- [Fix bug in db3ad8a67661d7b234a6954d9c6a4a9b1749f5eb](https://github.com/yt-dlp/yt-dlp/commit/d7cd97e8d8d42b500fea9abb2aa4ac9b0f98b2ad) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
- [Improve `--download-sections`](https://github.com/yt-dlp/yt-dlp/commit/b4e0d75848e9447cee2cd3646ce54d4744a7ff56) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
- [Indicate `filesize` approximated from `tbr` better](https://github.com/yt-dlp/yt-dlp/commit/0dff8e4d1e6e9fb938f4256ea9af7d81f42fd54f) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
|
||||||
|
#### Extractor changes
|
||||||
|
- [Support multiple `_VALID_URL`s](https://github.com/yt-dlp/yt-dlp/commit/5fd8367496b42c7b900b896a0d5460561a2859de) ([#5812](https://github.com/yt-dlp/yt-dlp/issues/5812)) by [nixxo](https://github.com/nixxo)
|
||||||
|
- **dplay**: GlobalCyclingNetworkPlus: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/774aa09dd6aa61ced9ec818d1f67e53414d22762) ([#7360](https://github.com/yt-dlp/yt-dlp/issues/7360)) by [bashonly](https://github.com/bashonly)
|
||||||
|
- **dropout**: [Fix season extraction](https://github.com/yt-dlp/yt-dlp/commit/db22142f6f817ff673d417b4b78e8db497bf8ab3) ([#7304](https://github.com/yt-dlp/yt-dlp/issues/7304)) by [OverlordQ](https://github.com/OverlordQ)
|
||||||
|
- **motherless**: [Add gallery support, fix groups](https://github.com/yt-dlp/yt-dlp/commit/f2ff0f6f1914b82d4a51681a72cc0828115dcb4a) ([#7211](https://github.com/yt-dlp/yt-dlp/issues/7211)) by [rexlambert22](https://github.com/rexlambert22), [Ti4eeT4e](https://github.com/Ti4eeT4e)
|
||||||
|
- **nebula**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/3f756c8c4095b942cf49788eb0862ceaf57847f2) ([#7156](https://github.com/yt-dlp/yt-dlp/issues/7156)) by [Lamieur](https://github.com/Lamieur), [rohieb](https://github.com/rohieb)
|
||||||
|
- **rheinmaintv**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/98cb1eda7a4cf67c96078980dbd63e6c06ad7f7c) ([#7311](https://github.com/yt-dlp/yt-dlp/issues/7311)) by [barthelmannk](https://github.com/barthelmannk)
|
||||||
|
- **youtube**
|
||||||
|
- [Add `ios` to default clients used](https://github.com/yt-dlp/yt-dlp/commit/1e75d97db21152acc764b30a688e516f04b8a142)
|
||||||
|
- IOS is affected neither by 403 nor by nsig so helps mitigate them preemptively
|
||||||
|
- IOS also has higher bit-rate 'premium' formats though they are not labeled as such
|
||||||
|
- [Improve description parsing performance](https://github.com/yt-dlp/yt-dlp/commit/71dc18fa29263a1ff0472c23d81bfc8dd4422d48) ([#7315](https://github.com/yt-dlp/yt-dlp/issues/7315)) by [berkanteber](https://github.com/berkanteber), [pukkandan](https://github.com/pukkandan)
|
||||||
|
- [Improve nsig function name extraction](https://github.com/yt-dlp/yt-dlp/commit/cd810afe2ac5567c822b7424800fc470ef2d0045) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
- [Workaround 403 for android formats](https://github.com/yt-dlp/yt-dlp/commit/81ca451480051d7ce1a31c017e005358345a9149) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
|
||||||
|
#### Misc. changes
|
||||||
|
- [Revert "Add automatic duplicate issue detection"](https://github.com/yt-dlp/yt-dlp/commit/a4486bfc1dc7057efca9dd3fe70d7fa25c56f700)
|
||||||
|
- **cleanup**
|
||||||
|
- Miscellaneous
|
||||||
|
- [7f9c6a6](https://github.com/yt-dlp/yt-dlp/commit/7f9c6a63b16e145495479e9f666f5b9e2ee69e2f) by [bashonly](https://github.com/bashonly)
|
||||||
|
- [812cdfa](https://github.com/yt-dlp/yt-dlp/commit/812cdfa06c33a40e73a8e04b3e6f42c084666a43) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
|
||||||
|
### 2023.06.21
|
||||||
|
|
||||||
|
#### Important changes
|
||||||
|
- YouTube: Improved throttling and signature fixes
|
||||||
|
|
||||||
|
#### Core changes
|
||||||
|
- [Add `--compat-option playlist-match-filter`](https://github.com/yt-dlp/yt-dlp/commit/93b39cdbd9dcf351bfa0c4ee252805b4617fdca9) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
- [Add `--no-quiet`](https://github.com/yt-dlp/yt-dlp/commit/d669772c65e8630162fd6555d0a578b246591921) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
- [Add option `--color`](https://github.com/yt-dlp/yt-dlp/commit/8417f26b8a819cd7ffcd4e000ca3e45033e670fb) ([#6904](https://github.com/yt-dlp/yt-dlp/issues/6904)) by [Grub4K](https://github.com/Grub4K)
|
||||||
|
- [Add option `--netrc-cmd`](https://github.com/yt-dlp/yt-dlp/commit/db3ad8a67661d7b234a6954d9c6a4a9b1749f5eb) ([#6682](https://github.com/yt-dlp/yt-dlp/issues/6682)) by [NDagestad](https://github.com/NDagestad), [pukkandan](https://github.com/pukkandan)
|
||||||
|
- [Add option `--xff`](https://github.com/yt-dlp/yt-dlp/commit/c16644642b08e2bf4130a6c5fa01395d8718c990) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
- [Auto-select default format in `-f-`](https://github.com/yt-dlp/yt-dlp/commit/372a0f3b9dadd1e52234b498aa4c7040ef868c7d) ([#7101](https://github.com/yt-dlp/yt-dlp/issues/7101)) by [ivanskodje](https://github.com/ivanskodje), [pukkandan](https://github.com/pukkandan)
|
||||||
|
- [Deprecate internal `Youtubedl-no-compression` header](https://github.com/yt-dlp/yt-dlp/commit/955c89584b66fcd0fcfab3e611f1edeb1ca63886) ([#6876](https://github.com/yt-dlp/yt-dlp/issues/6876)) by [coletdjnz](https://github.com/coletdjnz)
|
||||||
|
- [Do not translate newlines in `--print-to-file`](https://github.com/yt-dlp/yt-dlp/commit/9874e82b5a61582169300bea561b3e8899ad1ef7) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
- [Ensure pre-processor errors do not block `--print`](https://github.com/yt-dlp/yt-dlp/commit/f005a35aa7e4f67a0c603a946c0dd714c151b2d6) by [pukkandan](https://github.com/pukkandan) (With fixes in [17ba434](https://github.com/yt-dlp/yt-dlp/commit/17ba4343cf99701692a7f4798fd42b50f644faba))
|
||||||
|
- [Fix `filepath` being copied to underlying format dict](https://github.com/yt-dlp/yt-dlp/commit/84078a8b38f403495d00b46654c8750774d821de) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
- [Improve HTTP redirect handling](https://github.com/yt-dlp/yt-dlp/commit/08916a49c777cb6e000eec092881eb93ec22076c) ([#7094](https://github.com/yt-dlp/yt-dlp/issues/7094)) by [coletdjnz](https://github.com/coletdjnz)
|
||||||
|
- [Populate `filename` and `urls` fields at all stages of `--print`](https://github.com/yt-dlp/yt-dlp/commit/170605840ea9d5ad75da6576485ea7d125b428ee) by [pukkandan](https://github.com/pukkandan) (With fixes in [b5f61b6](https://github.com/yt-dlp/yt-dlp/commit/b5f61b69d4561b81fc98c226b176f0c15493e688))
|
||||||
|
- [Relaxed validation for numeric format filters](https://github.com/yt-dlp/yt-dlp/commit/c3f624ef0a5d7a6ae1c5ffeb243087e9fc7d79dc) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
- [Support decoding multiple content encodings](https://github.com/yt-dlp/yt-dlp/commit/daafbf49b3482edae4d70dd37070be99742a926e) ([#7142](https://github.com/yt-dlp/yt-dlp/issues/7142)) by [coletdjnz](https://github.com/coletdjnz)
|
||||||
|
- [Support loading info.json with a list at it's root](https://github.com/yt-dlp/yt-dlp/commit/ab1de9cb1e39cf421c2b7dc6756c6ff1955bb313) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
- [Workaround erroneous urllib Windows proxy parsing](https://github.com/yt-dlp/yt-dlp/commit/3f66b6fe50f8d5b545712f8b19d5ae62f5373980) ([#7092](https://github.com/yt-dlp/yt-dlp/issues/7092)) by [coletdjnz](https://github.com/coletdjnz)
|
||||||
|
- **cookies**
|
||||||
|
- [Defer extraction of v11 key from keyring](https://github.com/yt-dlp/yt-dlp/commit/9b7a48abd1b187eae1e3f6c9839c47d43ccec00b) by [Grub4K](https://github.com/Grub4K)
|
||||||
|
- [Move `YoutubeDLCookieJar` to cookies module](https://github.com/yt-dlp/yt-dlp/commit/b87e01c123fd560b6a674ce00f45a9459d82d98a) ([#7091](https://github.com/yt-dlp/yt-dlp/issues/7091)) by [coletdjnz](https://github.com/coletdjnz)
|
||||||
|
- [Support custom Safari cookies path](https://github.com/yt-dlp/yt-dlp/commit/a58182b75a05fe0a10c5e94a536711d3ade19c20) ([#6783](https://github.com/yt-dlp/yt-dlp/issues/6783)) by [NextFire](https://github.com/NextFire)
|
||||||
|
- [Update for chromium changes](https://github.com/yt-dlp/yt-dlp/commit/b38d4c941d1993ab27e4c0f8e024e23c2ec0f8f8) ([#6897](https://github.com/yt-dlp/yt-dlp/issues/6897)) by [mbway](https://github.com/mbway)
|
||||||
|
- **Cryptodome**: [Fix `__bool__`](https://github.com/yt-dlp/yt-dlp/commit/98ac902c4979e4529b166e873473bef42baa2e3e) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
- **jsinterp**
|
||||||
|
- [Do not compile regex](https://github.com/yt-dlp/yt-dlp/commit/7aeda6cc9e73ada0b0a0b6a6748c66bef63a20a8) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
- [Fix division](https://github.com/yt-dlp/yt-dlp/commit/b4a252fba81f53631c07ca40ce7583f5d19a8a36) ([#7279](https://github.com/yt-dlp/yt-dlp/issues/7279)) by [bashonly](https://github.com/bashonly)
|
||||||
|
- [Fix global object extraction](https://github.com/yt-dlp/yt-dlp/commit/01aba2519a0884ef17d5f85608dbd2a455577147) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
- [Handle `NaN` in bitwise operators](https://github.com/yt-dlp/yt-dlp/commit/1d7656184c6b8aa46b29149893894b3c24f1df00) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
- [Handle negative numbers better](https://github.com/yt-dlp/yt-dlp/commit/7cf51f21916292cd80bdeceb37489f5322f166dd) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
- **outtmpl**
|
||||||
|
- [Allow `\n` in replacements and default.](https://github.com/yt-dlp/yt-dlp/commit/78fde6e3398ff11e5d383a66b28664badeab5180) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
- [Fix some minor bugs](https://github.com/yt-dlp/yt-dlp/commit/ebe1b4e34f43c3acad30e4bcb8484681a030c114) by [pukkandan](https://github.com/pukkandan) (With fixes in [1619ab3](https://github.com/yt-dlp/yt-dlp/commit/1619ab3e67d8dc4f86fc7ed292c79345bc0d91a0))
|
||||||
|
- [Support `str.format` syntax inside replacements](https://github.com/yt-dlp/yt-dlp/commit/ec9311c41b111110bc52cfbd6ea682c6fb23f77a) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
- **update**
|
||||||
|
- [Better error handling](https://github.com/yt-dlp/yt-dlp/commit/d2e84d5eb01c66fc5304e8566348d65a7be24ed7) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
- [Do not restart into versions without `--update-to`](https://github.com/yt-dlp/yt-dlp/commit/02948a17d903f544363bb20b51a6d8baed7bba08) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
- [Implement `--update-to` repo](https://github.com/yt-dlp/yt-dlp/commit/665472a7de3880578c0b7b3f95c71570c056368e) by [Grub4K](https://github.com/Grub4K), [pukkandan](https://github.com/pukkandan)
|
||||||
|
- **upstream**
|
||||||
|
- [Merged with youtube-dl 07af47](https://github.com/yt-dlp/yt-dlp/commit/42f2d40b475db66486a4b4fe5b56751a640db5db) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
- [Merged with youtube-dl d1c6c5](https://github.com/yt-dlp/yt-dlp/commit/4823ec9f461512daa1b8ab362893bb86a6320b26) by [pukkandan](https://github.com/pukkandan) (With fixes in [edbe5b5](https://github.com/yt-dlp/yt-dlp/commit/edbe5b589dd0860a67b4e03f58db3cd2539d91c2) by [bashonly](https://github.com/bashonly))
|
||||||
|
- **utils**
|
||||||
|
- `FormatSorter`: [Improve `size` and `br`](https://github.com/yt-dlp/yt-dlp/commit/eedda5252c05327748dede204a8fccafa0288118) by [pukkandan](https://github.com/pukkandan), [u-spec-png](https://github.com/u-spec-png)
|
||||||
|
- `js_to_json`: [Implement template strings](https://github.com/yt-dlp/yt-dlp/commit/0898c5c8ccadfc404472456a7a7751b72afebadd) ([#6623](https://github.com/yt-dlp/yt-dlp/issues/6623)) by [Grub4K](https://github.com/Grub4K)
|
||||||
|
- `locked_file`: [Fix for virtiofs](https://github.com/yt-dlp/yt-dlp/commit/45998b3e371b819ce0dbe50da703809a048cc2fe) ([#6840](https://github.com/yt-dlp/yt-dlp/issues/6840)) by [brandon-dacrib](https://github.com/brandon-dacrib)
|
||||||
|
- `strftime_or_none`: [Handle negative timestamps](https://github.com/yt-dlp/yt-dlp/commit/a35af4306d24c56c6358f89cdf204860d1cd62b4) by [dirkf](https://github.com/dirkf), [pukkandan](https://github.com/pukkandan)
|
||||||
|
- `traverse_obj`
|
||||||
|
- [Allow iterables in traversal](https://github.com/yt-dlp/yt-dlp/commit/21b5ec86c2c37d10c5bb97edd7051d3aac16bb3e) ([#6902](https://github.com/yt-dlp/yt-dlp/issues/6902)) by [Grub4K](https://github.com/Grub4K)
|
||||||
|
- [More fixes](https://github.com/yt-dlp/yt-dlp/commit/b079c26f0af8085bccdadc72c61c8164ca5ab0f8) ([#6959](https://github.com/yt-dlp/yt-dlp/issues/6959)) by [Grub4K](https://github.com/Grub4K)
|
||||||
|
- `write_string`: [Fix noconsole behavior](https://github.com/yt-dlp/yt-dlp/commit/3b479100df02e20dd949e046003ae96ddbfced57) by [Grub4K](https://github.com/Grub4K)
|
||||||
|
|
||||||
|
#### Extractor changes
|
||||||
|
- [Do not exit early for unsuitable `url_result`](https://github.com/yt-dlp/yt-dlp/commit/baa922b5c74b10e3b86ff5e6cf6529b3aae8efab) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
- [Do not warn for invalid chapter data in description](https://github.com/yt-dlp/yt-dlp/commit/84ffeb7d5e72e3829319ba7720a8480fc4c7503b) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
- [Extract more metadata from ISM](https://github.com/yt-dlp/yt-dlp/commit/f68434cc74cfd3db01b266476a2eac8329fbb267) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
- **abematv**: [Add fallback for title and description extraction and extract more metadata](https://github.com/yt-dlp/yt-dlp/commit/c449c0655d7c8549e6e1389c26b628053b253d39) ([#6994](https://github.com/yt-dlp/yt-dlp/issues/6994)) by [Lesmiscore](https://github.com/Lesmiscore)
|
||||||
|
- **acast**: [Support embeds](https://github.com/yt-dlp/yt-dlp/commit/c91ac833ea99b00506e470a44cf930e4e23378c9) ([#7212](https://github.com/yt-dlp/yt-dlp/issues/7212)) by [pabs3](https://github.com/pabs3)
|
||||||
|
- **adobepass**: [Handle `Charter_Direct` MSO as `Spectrum`](https://github.com/yt-dlp/yt-dlp/commit/ea0570820336a0fe9c3b530d1b0d1e59313274f4) ([#6824](https://github.com/yt-dlp/yt-dlp/issues/6824)) by [bashonly](https://github.com/bashonly)
|
||||||
|
- **aeonco**: [Support Youtube embeds](https://github.com/yt-dlp/yt-dlp/commit/ed81b74802b4247ee8d9dc0ef87eb52baefede1c) ([#6591](https://github.com/yt-dlp/yt-dlp/issues/6591)) by [alexklapheke](https://github.com/alexklapheke)
|
||||||
|
- **afreecatv**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/fdd69db38924c38194ef236b26325d66ac815c88) ([#6283](https://github.com/yt-dlp/yt-dlp/issues/6283)) by [blmarket](https://github.com/blmarket)
|
||||||
|
- **ARDBetaMediathek**: [Add thumbnail](https://github.com/yt-dlp/yt-dlp/commit/f78eb41e1c0f1dcdb10317358a26bf541dc7ee15) ([#6890](https://github.com/yt-dlp/yt-dlp/issues/6890)) by [StefanLobbenmeier](https://github.com/StefanLobbenmeier)
|
||||||
|
- **bibeltv**: [Fix extraction, support live streams and series](https://github.com/yt-dlp/yt-dlp/commit/4ad58667c102bd82a7c4cca8aa395ec1682e3b4c) ([#6505](https://github.com/yt-dlp/yt-dlp/issues/6505)) by [flashdagger](https://github.com/flashdagger)
|
||||||
|
- **bilibili**
|
||||||
|
- [Support festival videos](https://github.com/yt-dlp/yt-dlp/commit/ab29e47029e2f5b48abbbab78e82faf7cf6e9506) ([#6547](https://github.com/yt-dlp/yt-dlp/issues/6547)) by [qbnu](https://github.com/qbnu)
|
||||||
|
- SpaceVideo: [Extract signature](https://github.com/yt-dlp/yt-dlp/commit/6f10cdcf7eeaeae5b75e0a4428cd649c156a2d83) ([#7149](https://github.com/yt-dlp/yt-dlp/issues/7149)) by [elyse0](https://github.com/elyse0)
|
||||||
|
- **biliIntl**: [Add comment extraction](https://github.com/yt-dlp/yt-dlp/commit/b093c38cc9f26b59a8504211d792f053142c847d) ([#6079](https://github.com/yt-dlp/yt-dlp/issues/6079)) by [HobbyistDev](https://github.com/HobbyistDev)
|
||||||
|
- **bitchute**: [Add more fallback subdomains](https://github.com/yt-dlp/yt-dlp/commit/0c4e0fbcade0fc92d14c2a6d63e360fe067f6192) ([#6907](https://github.com/yt-dlp/yt-dlp/issues/6907)) by [Neurognostic](https://github.com/Neurognostic)
|
||||||
|
- **booyah**: [Remove extractor](https://github.com/yt-dlp/yt-dlp/commit/f7f7a877bf8e87fd4eb0ad2494ad948ca7691114) by [pukkandan](https://github.com/pukkandan)
|
||||||
|
- **BrainPOP**: [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/979568f26ece80bca72b48f0dd57d676e431059a) ([#6106](https://github.com/yt-dlp/yt-dlp/issues/6106)) by [MinePlayersPE](https://github.com/MinePlayersPE)
|
||||||
|
- **bravotv**
|
||||||
|
- [Detect DRM](https://github.com/yt-dlp/yt-dlp/commit/1fe5bf240e6ade487d18079a62aa36bcc440a27a) ([#7171](https://github.com/yt-dlp/yt-dlp/issues/7171)) by [bashonly](https://github.com/bashonly)
|
||||||
|
- [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/06966cb8966b9aa4f60ab9c44c182a057d4ca3a3) ([#6568](https://github.com/yt-dlp/yt-dlp/issues/6568)) by [bashonly](https://github.com/bashonly)
|
||||||
|
- **camfm**: [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/4cbfa570a1b9bd65b0f48770693377e8d842dcb0) ([#7083](https://github.com/yt-dlp/yt-dlp/issues/7083)) by [garret1317](https://github.com/garret1317)
|
||||||
|
- **cbc**
|
||||||
|
- [Fix live extractor, playlist `_VALID_URL`](https://github.com/yt-dlp/yt-dlp/commit/7a7b1376fbce0067cf37566bb47131bc0022638d) ([#6625](https://github.com/yt-dlp/yt-dlp/issues/6625)) by [makew0rld](https://github.com/makew0rld)
|
||||||
|
- [Ignore 426 from API](https://github.com/yt-dlp/yt-dlp/commit/4afb208cf07b59291ae3b0c4efc83945ee5b8812) ([#6781](https://github.com/yt-dlp/yt-dlp/issues/6781)) by [jo-nike](https://github.com/jo-nike)
|
||||||
|
- gem: [Update `_VALID_URL`](https://github.com/yt-dlp/yt-dlp/commit/871c907454693940cb56906ed9ea49fcb7154829) ([#6499](https://github.com/yt-dlp/yt-dlp/issues/6499)) by [makeworld-the-better-one](https://github.com/makeworld-the-better-one)
|
||||||
|
- **cbs**: [Add `ParamountPressExpress` extractor](https://github.com/yt-dlp/yt-dlp/commit/44369c9afa996e14e9f466754481d878811b5b4a) ([#6604](https://github.com/yt-dlp/yt-dlp/issues/6604)) by [bashonly](https://github.com/bashonly)
|
||||||
|
- **cbsnews**: [Overhaul extractors](https://github.com/yt-dlp/yt-dlp/commit/f6e43d6fa9804c24525e1fed0a87782754dab7ed) ([#6681](https://github.com/yt-dlp/yt-dlp/issues/6681)) by [bashonly](https://github.com/bashonly)
|
||||||
|
- **chilloutzone**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/6f4fc5660f40f3458882a8f51601eae4af7be609) ([#6445](https://github.com/yt-dlp/yt-dlp/issues/6445)) by [bashonly](https://github.com/bashonly)
|
||||||
|
- **clipchamp**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/2f07c4c1da4361af213e5791279b9d152d2e4ce3) ([#6978](https://github.com/yt-dlp/yt-dlp/issues/6978)) by [bashonly](https://github.com/bashonly)
|
||||||
|
- **comedycentral**: [Add support for movies](https://github.com/yt-dlp/yt-dlp/commit/66468bbf49562ff82670cbbd456c5e8448a6df34) ([#7108](https://github.com/yt-dlp/yt-dlp/issues/7108)) by [sqrtNOT](https://github.com/sqrtNOT)
|
||||||
|
- **crtvg**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/26c517b29c8727e47948d6fff749d5297f0efb60) ([#7168](https://github.com/yt-dlp/yt-dlp/issues/7168)) by [ItzMaxTV](https://github.com/ItzMaxTV)
|
||||||
|
- **crunchyroll**: [Rework with support for movies, music and artists](https://github.com/yt-dlp/yt-dlp/commit/032de83ea9ff2f4977d9c71a93bbc1775597b762) ([#6237](https://github.com/yt-dlp/yt-dlp/issues/6237)) by [Grub4K](https://github.com/Grub4K)
|
||||||
|
- **dacast**: [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/c25cac2f8e5fbac2737a426d7778fd2f0efc5381) ([#6896](https://github.com/yt-dlp/yt-dlp/issues/6896)) by [bashonly](https://github.com/bashonly)
|
||||||
|
- **daftsex**: [Update domain and embed player url](https://github.com/yt-dlp/yt-dlp/commit/fc5a7f9b27d2a89b1f3ca7d33a95301c21d832cd) ([#5966](https://github.com/yt-dlp/yt-dlp/issues/5966)) by [JChris246](https://github.com/JChris246)
- **DigitalConcertHall**: [Support films](https://github.com/yt-dlp/yt-dlp/commit/55ed4ff73487feb3177b037dfc2ea527e777da3e) ([#7202](https://github.com/yt-dlp/yt-dlp/issues/7202)) by [ItzMaxTV](https://github.com/ItzMaxTV)
- **discogs**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/6daaf21092888beff11b807cd46f832f1f9c46a0) ([#6624](https://github.com/yt-dlp/yt-dlp/issues/6624)) by [rjy](https://github.com/rjy)
- **dlf**: [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/b423b6a48e0b19260bc95ab7d72d2138d7f124dc) ([#6697](https://github.com/yt-dlp/yt-dlp/issues/6697)) by [nick-cd](https://github.com/nick-cd)
- **drtv**: [Fix radio page extraction](https://github.com/yt-dlp/yt-dlp/commit/9a06b7b1891b48cebbe275652ae8025a36d97d97) ([#6552](https://github.com/yt-dlp/yt-dlp/issues/6552)) by [viktor-enzell](https://github.com/viktor-enzell)
- **Dumpert**: [Fix m3u8 and support new URL pattern](https://github.com/yt-dlp/yt-dlp/commit/f8ae441501596733e2b967430471643a1d7cacb8) ([#6091](https://github.com/yt-dlp/yt-dlp/issues/6091)) by [DataGhost](https://github.com/DataGhost), [pukkandan](https://github.com/pukkandan)
- **elevensports**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/ecfe47973f6603b5367fe2cc3c65274627d94516) ([#7172](https://github.com/yt-dlp/yt-dlp/issues/7172)) by [ItzMaxTV](https://github.com/ItzMaxTV)
- **ettutv**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/83465fc4100a2fb2c188898fbc2f3021f6a9b4dd) ([#6579](https://github.com/yt-dlp/yt-dlp/issues/6579)) by [elyse0](https://github.com/elyse0)
- **europarl**: [Rewrite extractor](https://github.com/yt-dlp/yt-dlp/commit/03789976d301eaed3e957dbc041573098f6af059) ([#7114](https://github.com/yt-dlp/yt-dlp/issues/7114)) by [HobbyistDev](https://github.com/HobbyistDev)
- **eurosport**: [Improve `_VALID_URL`](https://github.com/yt-dlp/yt-dlp/commit/45e87ea106ad37b2a002663fa30ee41ce97b16cd) ([#7076](https://github.com/yt-dlp/yt-dlp/issues/7076)) by [HobbyistDev](https://github.com/HobbyistDev)
- **facebook**: [Fix metadata extraction](https://github.com/yt-dlp/yt-dlp/commit/3b52a606881e6adadc33444abdeacce562b79330) ([#6856](https://github.com/yt-dlp/yt-dlp/issues/6856)) by [ringus1](https://github.com/ringus1)
- **foxnews**: [Fix extractors](https://github.com/yt-dlp/yt-dlp/commit/97d60ad8cd6c99f01e463a9acfce8693aff2a609) ([#7222](https://github.com/yt-dlp/yt-dlp/issues/7222)) by [bashonly](https://github.com/bashonly)
- **funker530**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/cab94a0cd8b6d3fffed5a6faff030274adbed182) ([#7291](https://github.com/yt-dlp/yt-dlp/issues/7291)) by [Cyberes](https://github.com/Cyberes)
- **generic**
- [Accept values for `fragment_query`, `variant_query`](https://github.com/yt-dlp/yt-dlp/commit/5cc0a8fd2e9fec50026fb92170b57993af939e4a) ([#6600](https://github.com/yt-dlp/yt-dlp/issues/6600)) by [bashonly](https://github.com/bashonly) (With fixes in [9bfe0d1](https://github.com/yt-dlp/yt-dlp/commit/9bfe0d15bd7dbdc6b0e6378fa9f5e2e289b2373b))
- [Add extractor-args `hls_key`, `variant_query`](https://github.com/yt-dlp/yt-dlp/commit/c2e0fc40a73dd85ab3920f977f579d475e66ef59) ([#6567](https://github.com/yt-dlp/yt-dlp/issues/6567)) by [bashonly](https://github.com/bashonly)
- [Attempt to detect live HLS](https://github.com/yt-dlp/yt-dlp/commit/93e7c6995e07dafb9dcc06c0d06acf6c5bdfecc5) ([#6775](https://github.com/yt-dlp/yt-dlp/issues/6775)) by [bashonly](https://github.com/bashonly)
- **genius**: [Add support for articles](https://github.com/yt-dlp/yt-dlp/commit/460da07439718d9af1e3661da2a23e05a913a2e6) ([#6474](https://github.com/yt-dlp/yt-dlp/issues/6474)) by [bashonly](https://github.com/bashonly)
- **globalplayer**: [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/30647668a92a0ca5cd108776804baac0996bd9f7) ([#6903](https://github.com/yt-dlp/yt-dlp/issues/6903)) by [garret1317](https://github.com/garret1317)
- **gmanetwork**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/2d97d154fe4fb84fe2ed3a4e1ed5819e89b71e88) ([#5945](https://github.com/yt-dlp/yt-dlp/issues/5945)) by [HobbyistDev](https://github.com/HobbyistDev)
- **gronkh**: [Extract duration and chapters](https://github.com/yt-dlp/yt-dlp/commit/9c92b803fa24e48543ce969468d5404376e315b7) ([#6817](https://github.com/yt-dlp/yt-dlp/issues/6817)) by [satan1st](https://github.com/satan1st)
- **hentaistigma**: [Remove extractor](https://github.com/yt-dlp/yt-dlp/commit/04f8018a0544736a18494bc3899d06b05b78fae6) by [pukkandan](https://github.com/pukkandan)
- **hidive**: [Fix login](https://github.com/yt-dlp/yt-dlp/commit/e6ab678e36c40ded0aae305bbb866cdab554d417) by [pukkandan](https://github.com/pukkandan)
- **hollywoodreporter**: [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/6bdb64e2a2a6d504d8ce1dc830fbfb8a7f199c63) ([#6614](https://github.com/yt-dlp/yt-dlp/issues/6614)) by [bashonly](https://github.com/bashonly)
- **hotstar**: [Support `/shows/` URLs](https://github.com/yt-dlp/yt-dlp/commit/7f8ddebbb51c9fd4a347306332a718ba41b371b8) ([#7225](https://github.com/yt-dlp/yt-dlp/issues/7225)) by [bashonly](https://github.com/bashonly)
- **hrefli**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/7e35526d5b970a034b9d76215ee3e4bd7631edcd) ([#6762](https://github.com/yt-dlp/yt-dlp/issues/6762)) by [selfisekai](https://github.com/selfisekai)
- **idolplus**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/5c14b213679ed4401288bdc86ae696932e219222) ([#6732](https://github.com/yt-dlp/yt-dlp/issues/6732)) by [ping](https://github.com/ping)
- **iq**: [Set more language codes](https://github.com/yt-dlp/yt-dlp/commit/2d5cae9636714ff922d28c548c349d5f2b48f317) ([#6476](https://github.com/yt-dlp/yt-dlp/issues/6476)) by [D0LLYNH0](https://github.com/D0LLYNH0)
- **iwara**
- [Accept old URLs](https://github.com/yt-dlp/yt-dlp/commit/ab92d8651c48d247dfb7d3f0a824cc986e47c7ed) by [Lesmiscore](https://github.com/Lesmiscore)
- [Fix authentication](https://github.com/yt-dlp/yt-dlp/commit/0a5d7c39e17bb9bd50c9db42bcad40eb82d7f784) ([#7137](https://github.com/yt-dlp/yt-dlp/issues/7137)) by [toomyzoom](https://github.com/toomyzoom)
- [Fix format sorting](https://github.com/yt-dlp/yt-dlp/commit/56793f74c36899742d7abd52afb0deca97d469e1) ([#6651](https://github.com/yt-dlp/yt-dlp/issues/6651)) by [hasezoey](https://github.com/hasezoey)
- [Fix typo](https://github.com/yt-dlp/yt-dlp/commit/d1483ec693c79f0b4ddf493870bcb840aca4da08) by [Lesmiscore](https://github.com/Lesmiscore)
- [Implement login](https://github.com/yt-dlp/yt-dlp/commit/21b9413cf7dd4830b2ece57af21589dd4538fc52) ([#6721](https://github.com/yt-dlp/yt-dlp/issues/6721)) by [toomyzoom](https://github.com/toomyzoom)
- [Overhaul extractors](https://github.com/yt-dlp/yt-dlp/commit/c14af7a741931b364bab3d9546c0f4359f318f8c) ([#6557](https://github.com/yt-dlp/yt-dlp/issues/6557)) by [Lesmiscore](https://github.com/Lesmiscore)
- [Report private videos](https://github.com/yt-dlp/yt-dlp/commit/95a383be1b6fb00c92ee3fb091732c4f6009acb6) ([#6641](https://github.com/yt-dlp/yt-dlp/issues/6641)) by [Lesmiscore](https://github.com/Lesmiscore)
- **JStream**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/3459d3c5af3b2572ed51e8ecfda6c11022a838c6) ([#6252](https://github.com/yt-dlp/yt-dlp/issues/6252)) by [Lesmiscore](https://github.com/Lesmiscore)
- **jwplatform**: [Update `_extract_embed_urls`](https://github.com/yt-dlp/yt-dlp/commit/cf9fd52fabe71d6e7c30d3ea525029ffa561fc9c) ([#6383](https://github.com/yt-dlp/yt-dlp/issues/6383)) by [carusocr](https://github.com/carusocr)
- **kick**: [Make initial request non-fatal](https://github.com/yt-dlp/yt-dlp/commit/0a6918a4a1431960181d8c50e0bbbcb0afbaff9a) by [bashonly](https://github.com/bashonly)
- **LastFM**: [Rewrite playlist extraction](https://github.com/yt-dlp/yt-dlp/commit/026435714cb7c39613a0d7d2acd15d3823b78d94) ([#6379](https://github.com/yt-dlp/yt-dlp/issues/6379)) by [hatienl0i261299](https://github.com/hatienl0i261299), [pukkandan](https://github.com/pukkandan)
- **lbry**: [Extract original quality formats](https://github.com/yt-dlp/yt-dlp/commit/44c0d66442b568d9e1359e669d8b029b08a77fa7) ([#7257](https://github.com/yt-dlp/yt-dlp/issues/7257)) by [bashonly](https://github.com/bashonly)
- **line**: [Remove extractors](https://github.com/yt-dlp/yt-dlp/commit/faa0332ed69e070cf3bd31390589a596e962f392) ([#6734](https://github.com/yt-dlp/yt-dlp/issues/6734)) by [sian1468](https://github.com/sian1468)
- **livestream**: [Support videos with account id](https://github.com/yt-dlp/yt-dlp/commit/bfdf144c7e5d7a93fbfa9d8e65598c72bf2b542a) ([#6324](https://github.com/yt-dlp/yt-dlp/issues/6324)) by [theperfectpunk](https://github.com/theperfectpunk)
- **medaltv**: [Fix clips](https://github.com/yt-dlp/yt-dlp/commit/1e3c2b6ec28d7ab5e31341fa93c47b65be4fbff4) ([#6502](https://github.com/yt-dlp/yt-dlp/issues/6502)) by [xenova](https://github.com/xenova)
- **mediastream**: [Improve `WinSports` and embed extraction](https://github.com/yt-dlp/yt-dlp/commit/03025b6e105139d01cd415ddc51fd692957fd2ba) ([#6426](https://github.com/yt-dlp/yt-dlp/issues/6426)) by [bashonly](https://github.com/bashonly)
- **mgtv**: [Fix formats extraction](https://github.com/yt-dlp/yt-dlp/commit/59d9fe08312bbb76ee26238d207a8ca35410a48d) ([#7234](https://github.com/yt-dlp/yt-dlp/issues/7234)) by [bashonly](https://github.com/bashonly)
- **Mzaalo**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/dc3c44f349ba85af320e706e2a27ad81a78b1c6e) ([#7163](https://github.com/yt-dlp/yt-dlp/issues/7163)) by [ItzMaxTV](https://github.com/ItzMaxTV)
- **nbc**: [Fix `NBCStations` direct mp4 formats](https://github.com/yt-dlp/yt-dlp/commit/9be0fe1fd967f62cbf3c60bd14e1021a70abc147) ([#6637](https://github.com/yt-dlp/yt-dlp/issues/6637)) by [bashonly](https://github.com/bashonly)
- **nebula**: [Add `beta.nebula.tv`](https://github.com/yt-dlp/yt-dlp/commit/cbfe2e5cbe0f4649a91e323a82b8f5f774f36662) ([#6516](https://github.com/yt-dlp/yt-dlp/issues/6516)) by [unbeatable-101](https://github.com/unbeatable-101)
- **nekohacker**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/489f51279d00318018478fd7461eddbe3b45297e) ([#7003](https://github.com/yt-dlp/yt-dlp/issues/7003)) by [hasezoey](https://github.com/hasezoey)
- **nhk**
- [Add `NhkRadiru` extractor](https://github.com/yt-dlp/yt-dlp/commit/8f0be90ecb3b8d862397177bb226f17b245ef933) ([#6819](https://github.com/yt-dlp/yt-dlp/issues/6819)) by [garret1317](https://github.com/garret1317)
- [Fix API extraction](https://github.com/yt-dlp/yt-dlp/commit/f41b949a2ef646fbc36375febbe3f0c19d742c0f) ([#7180](https://github.com/yt-dlp/yt-dlp/issues/7180)) by [menschel](https://github.com/menschel), [sjthespian](https://github.com/sjthespian)
- `NhkRadiruLive`: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/81c8b9bdd9841b72cbfc1bbff9dab5fb4aa038b0) ([#7332](https://github.com/yt-dlp/yt-dlp/issues/7332)) by [garret1317](https://github.com/garret1317)
- **niconico**
- [Download comments from the new endpoint](https://github.com/yt-dlp/yt-dlp/commit/52ecc33e221f7de7eb6fed6c22489f0c5fdd2c6d) ([#6773](https://github.com/yt-dlp/yt-dlp/issues/6773)) by [Lesmiscore](https://github.com/Lesmiscore)
- live: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/f8f9250fe280d37f0988646cd5cc0072f4d33a6d) ([#5764](https://github.com/yt-dlp/yt-dlp/issues/5764)) by [Lesmiscore](https://github.com/Lesmiscore)
- series: [Fix extraction](https://github.com/yt-dlp/yt-dlp/commit/c86e433c35fe5da6cb29f3539eef97497f84ed38) ([#6898](https://github.com/yt-dlp/yt-dlp/issues/6898)) by [sqrtNOT](https://github.com/sqrtNOT)
- **nubilesporn**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/d4e6ef40772e0560a8ed33b844ef7549e86837be) ([#6231](https://github.com/yt-dlp/yt-dlp/issues/6231)) by [permunkle](https://github.com/permunkle)
- **odnoklassniki**: [Fix formats extraction](https://github.com/yt-dlp/yt-dlp/commit/1a2eb5bda51d8b7a78a65acebf72a0dcf9da196b) ([#7217](https://github.com/yt-dlp/yt-dlp/issues/7217)) by [bashonly](https://github.com/bashonly)
- **opencast**
- [Add ltitools to `_VALID_URL`](https://github.com/yt-dlp/yt-dlp/commit/3588be59cee429a0ab5c4ceb2f162298bb44147d) ([#6371](https://github.com/yt-dlp/yt-dlp/issues/6371)) by [C0D3D3V](https://github.com/C0D3D3V)
- [Fix format bug](https://github.com/yt-dlp/yt-dlp/commit/89dbf0848370deaa55af88c3593a2a264124caf5) ([#6512](https://github.com/yt-dlp/yt-dlp/issues/6512)) by [C0D3D3V](https://github.com/C0D3D3V)
- **owncloud**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/c6d4b82a8b8bce59b1c9ce5e6d349ea428dac0a7) ([#6533](https://github.com/yt-dlp/yt-dlp/issues/6533)) by [C0D3D3V](https://github.com/C0D3D3V)
- **Parler**: [Rewrite extractor](https://github.com/yt-dlp/yt-dlp/commit/80ea6d3dea8483cddd39fc89b5ee1fc06670c33c) ([#6446](https://github.com/yt-dlp/yt-dlp/issues/6446)) by [JChris246](https://github.com/JChris246)
- **pgatour**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/3ae182ad89e1427ff7b1684d6a44ff93fa857a0c) ([#6613](https://github.com/yt-dlp/yt-dlp/issues/6613)) by [bashonly](https://github.com/bashonly)
- **playsuisse**: [Support new url format](https://github.com/yt-dlp/yt-dlp/commit/94627c5dde12a72766bdba36e056916c29c40ed1) ([#6528](https://github.com/yt-dlp/yt-dlp/issues/6528)) by [sbor23](https://github.com/sbor23)
- **polskieradio**: [Improve extractors](https://github.com/yt-dlp/yt-dlp/commit/738c90a463257634455ada3e5c18b714c531dede) ([#5948](https://github.com/yt-dlp/yt-dlp/issues/5948)) by [selfisekai](https://github.com/selfisekai)
- **pornez**: [Support new URL formats](https://github.com/yt-dlp/yt-dlp/commit/cbdf9408e6f1e35e98fd6477b3d6902df5b8a47f) ([#6792](https://github.com/yt-dlp/yt-dlp/issues/6792)) by [zhgwn](https://github.com/zhgwn)
- **pornhub**: [Set access cookies to fix extraction](https://github.com/yt-dlp/yt-dlp/commit/62beefa818c75c20b6941389bb197051554a5d41) ([#6685](https://github.com/yt-dlp/yt-dlp/issues/6685)) by [arobase-che](https://github.com/arobase-che), [Schmoaaaaah](https://github.com/Schmoaaaaah)
- **rai**: [Rewrite extractors](https://github.com/yt-dlp/yt-dlp/commit/c6d3f81a4077aaf9cffc6aa2d0dec92f38e74bb0) ([#5940](https://github.com/yt-dlp/yt-dlp/issues/5940)) by [danog](https://github.com/danog), [nixxo](https://github.com/nixxo)
- **recurbate**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/c2502cfed91415c7ccfff925fd3404d230046484) ([#6297](https://github.com/yt-dlp/yt-dlp/issues/6297)) by [mrscrapy](https://github.com/mrscrapy)
- **reddit**
- [Add login support](https://github.com/yt-dlp/yt-dlp/commit/4d9280c9c853733534dda60486fa949bcca36c9e) ([#6950](https://github.com/yt-dlp/yt-dlp/issues/6950)) by [bashonly](https://github.com/bashonly)
- [Support cookies and short URLs](https://github.com/yt-dlp/yt-dlp/commit/7a6f6f24592a8065376f11a58e44878807732cf6) ([#6825](https://github.com/yt-dlp/yt-dlp/issues/6825)) by [bashonly](https://github.com/bashonly)
- **rokfin**: [Re-construct manifest url](https://github.com/yt-dlp/yt-dlp/commit/7a6c8a0807941dd24fbf0d6172e811884f98e027) ([#6507](https://github.com/yt-dlp/yt-dlp/issues/6507)) by [vampirefrog](https://github.com/vampirefrog)
- **rottentomatoes**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/2d306c03d6f2697fcbabb7da35aa62cc078359d3) ([#6844](https://github.com/yt-dlp/yt-dlp/issues/6844)) by [JChris246](https://github.com/JChris246)
- **rozhlas**
- [Extract manifest formats](https://github.com/yt-dlp/yt-dlp/commit/e4cf7741f9302b3faa092962f2895b55cb3d89bb) ([#6590](https://github.com/yt-dlp/yt-dlp/issues/6590)) by [bashonly](https://github.com/bashonly)
- `MujRozhlas`: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/c2b801fea59628d5c873e06a0727fbf2051bbd1f) ([#7129](https://github.com/yt-dlp/yt-dlp/issues/7129)) by [stanoarn](https://github.com/stanoarn)
- **rtvc**: [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/9b30cd3dfce83c2f0201b28a7a3ef44ab9722664) ([#6578](https://github.com/yt-dlp/yt-dlp/issues/6578)) by [elyse0](https://github.com/elyse0)
- **rumble**
- [Detect timeline format](https://github.com/yt-dlp/yt-dlp/commit/78bc1868ff3352108ab2911033d1ac67a55f151e) by [pukkandan](https://github.com/pukkandan)
- [Fix videos without quality selection](https://github.com/yt-dlp/yt-dlp/commit/6994afc030d2a786d8032075ed71a14d7eac5a4f) by [pukkandan](https://github.com/pukkandan)
- **sbs**: [Overhaul extractor for new API](https://github.com/yt-dlp/yt-dlp/commit/6a765f135ccb654861336ea27a2c1c24ea8e286f) ([#6839](https://github.com/yt-dlp/yt-dlp/issues/6839)) by [bashonly](https://github.com/bashonly), [dirkf](https://github.com/dirkf), [vidiot720](https://github.com/vidiot720)
- **shemaroome**: [Pass `stream_key` header to downloader](https://github.com/yt-dlp/yt-dlp/commit/7bc92517463f5766e9d9b92c3823b5cf403c0e3d) ([#7224](https://github.com/yt-dlp/yt-dlp/issues/7224)) by [bashonly](https://github.com/bashonly)
- **sonyliv**: [Fix login with token](https://github.com/yt-dlp/yt-dlp/commit/4815d35c191e7d375b94492a6486dd2ba43a8954) ([#7223](https://github.com/yt-dlp/yt-dlp/issues/7223)) by [bashonly](https://github.com/bashonly)
- **stageplus**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/e5265dc6517478e589ee3c1ff0cb19bdf4e35ce1) ([#6838](https://github.com/yt-dlp/yt-dlp/issues/6838)) by [bashonly](https://github.com/bashonly)
- **stripchat**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/f9213f8a2d7ba46b912afe1dd3ce6bb700a33d72) ([#7306](https://github.com/yt-dlp/yt-dlp/issues/7306)) by [foreignBlade](https://github.com/foreignBlade)
- **substack**: [Fix extraction](https://github.com/yt-dlp/yt-dlp/commit/12037d8b0a578fcc78a5c8f98964e48ee6060e25) ([#7218](https://github.com/yt-dlp/yt-dlp/issues/7218)) by [bashonly](https://github.com/bashonly)
- **sverigesradio**: [Support slug URLs](https://github.com/yt-dlp/yt-dlp/commit/5ee9a7d6e18ceea956e831994cf11c423979354f) ([#7220](https://github.com/yt-dlp/yt-dlp/issues/7220)) by [bashonly](https://github.com/bashonly)
- **tagesschau**: [Fix single audio urls](https://github.com/yt-dlp/yt-dlp/commit/af7585c824a1e405bd8afa46d87b4be322edc93c) ([#6626](https://github.com/yt-dlp/yt-dlp/issues/6626)) by [flashdagger](https://github.com/flashdagger)
- **teamcoco**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/c459d45dd4d417fb80a52e1a04e607776a44baa4) ([#6437](https://github.com/yt-dlp/yt-dlp/issues/6437)) by [bashonly](https://github.com/bashonly)
- **telecaribe**: [Expand livestream support](https://github.com/yt-dlp/yt-dlp/commit/69b2f838d3d3e37dc17367ef64d978db1bea45cf) ([#6601](https://github.com/yt-dlp/yt-dlp/issues/6601)) by [bashonly](https://github.com/bashonly)
- **tencent**: [Fix fatal metadata extraction](https://github.com/yt-dlp/yt-dlp/commit/971d901d129403e875a04dd92109507a03fbc070) ([#7219](https://github.com/yt-dlp/yt-dlp/issues/7219)) by [bashonly](https://github.com/bashonly)
- **thesun**: [Update `_VALID_URL`](https://github.com/yt-dlp/yt-dlp/commit/0181b9a1b31db3fde943f7cd3fe9662f23bff292) ([#6522](https://github.com/yt-dlp/yt-dlp/issues/6522)) by [hatienl0i261299](https://github.com/hatienl0i261299)
- **tiktok**
- [Extract 1080p adaptive formats](https://github.com/yt-dlp/yt-dlp/commit/c2a1bdb00931969193f2a31ea27b9c66a07aaec2) ([#7228](https://github.com/yt-dlp/yt-dlp/issues/7228)) by [bashonly](https://github.com/bashonly)
- [Fix and improve metadata extraction](https://github.com/yt-dlp/yt-dlp/commit/925936908a3c3ee0e508621db14696b9f6a8b563) ([#6777](https://github.com/yt-dlp/yt-dlp/issues/6777)) by [bashonly](https://github.com/bashonly)
- [Fix mp3 formats](https://github.com/yt-dlp/yt-dlp/commit/8ceb07e870424c219dced8f4348729553f05c5cc) ([#6615](https://github.com/yt-dlp/yt-dlp/issues/6615)) by [bashonly](https://github.com/bashonly)
- [Fix resolution extraction](https://github.com/yt-dlp/yt-dlp/commit/ab6057ec80aa75db6303b8206916d00c376c622c) ([#7237](https://github.com/yt-dlp/yt-dlp/issues/7237)) by [puc9](https://github.com/puc9)
- [Improve `TikTokLive` extractor](https://github.com/yt-dlp/yt-dlp/commit/216bcb66d7dce0762767d751dad10650cb57da9d) ([#6520](https://github.com/yt-dlp/yt-dlp/issues/6520)) by [bashonly](https://github.com/bashonly)
- **triller**: [Support short URLs, detect removed videos](https://github.com/yt-dlp/yt-dlp/commit/33b737bedf8383c0d00d4e1d06a5273dcdfdb756) ([#6636](https://github.com/yt-dlp/yt-dlp/issues/6636)) by [bashonly](https://github.com/bashonly)
- **tv4**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/125ffaa1737dd04716f2f6fbb0595ad3eb7a4b1c) ([#5649](https://github.com/yt-dlp/yt-dlp/issues/5649)) by [dirkf](https://github.com/dirkf), [TxI5](https://github.com/TxI5)
- **tvp**: [Use new API](https://github.com/yt-dlp/yt-dlp/commit/0c7ce146e4d2a84e656d78f6857952bfd25ab389) ([#6989](https://github.com/yt-dlp/yt-dlp/issues/6989)) by [selfisekai](https://github.com/selfisekai)
- **tvplay**: [Remove outdated domains](https://github.com/yt-dlp/yt-dlp/commit/937264419f9bf375d5656785ae6e53282587c15d) ([#7106](https://github.com/yt-dlp/yt-dlp/issues/7106)) by [ivanskodje](https://github.com/ivanskodje)
- **twitch**
- [Extract original size thumbnail](https://github.com/yt-dlp/yt-dlp/commit/80b732b7a9585b2a61e456dc0d2d014a439cbaee) ([#6629](https://github.com/yt-dlp/yt-dlp/issues/6629)) by [JC-Chung](https://github.com/JC-Chung)
- [Fix `is_live`](https://github.com/yt-dlp/yt-dlp/commit/0551511b45f7847f40e4314aa9e624e80d086539) ([#6500](https://github.com/yt-dlp/yt-dlp/issues/6500)) by [elyse0](https://github.com/elyse0)
- [Support mobile clips](https://github.com/yt-dlp/yt-dlp/commit/02312c03cf53eb1da24c9ad022ee79af26060733) ([#6699](https://github.com/yt-dlp/yt-dlp/issues/6699)) by [bepvte](https://github.com/bepvte)
- [Update `_CLIENT_ID` and add extractor-arg](https://github.com/yt-dlp/yt-dlp/commit/01231feb142e80828985aabdec04ac608e3d43e2) ([#7200](https://github.com/yt-dlp/yt-dlp/issues/7200)) by [bashonly](https://github.com/bashonly)
- vod: [Support links from schedule tab](https://github.com/yt-dlp/yt-dlp/commit/dbce5afa6bb61f6272ade613f2e9a3d66b88c7ea) ([#7071](https://github.com/yt-dlp/yt-dlp/issues/7071)) by [falbrechtskirchinger](https://github.com/falbrechtskirchinger)
- **twitter**
- [Add login support](https://github.com/yt-dlp/yt-dlp/commit/d1795f4a6af99c976c9d3ea2dabe5cf4f8965d3c) ([#7258](https://github.com/yt-dlp/yt-dlp/issues/7258)) by [bashonly](https://github.com/bashonly)
- [Default to GraphQL, handle auth errors](https://github.com/yt-dlp/yt-dlp/commit/147e62fc584c3ea6fdb09bb7a47905df68553a22) ([#6957](https://github.com/yt-dlp/yt-dlp/issues/6957)) by [bashonly](https://github.com/bashonly)
- spaces: [Add `release_timestamp`](https://github.com/yt-dlp/yt-dlp/commit/1c16d9df5330819cc79ad588b24aa5b72765c168) ([#7186](https://github.com/yt-dlp/yt-dlp/issues/7186)) by [CeruleanSky](https://github.com/CeruleanSky)
- **urplay**: [Extract all subtitles](https://github.com/yt-dlp/yt-dlp/commit/7bcd4813215ac98daa4949af2ffc677c78307a38) ([#7309](https://github.com/yt-dlp/yt-dlp/issues/7309)) by [hoaluvn](https://github.com/hoaluvn)
- **voot**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/4f7b11cc1c1cebf598107e00cd7295588ed484da) ([#7227](https://github.com/yt-dlp/yt-dlp/issues/7227)) by [bashonly](https://github.com/bashonly)
- **vrt**: [Overhaul extractors](https://github.com/yt-dlp/yt-dlp/commit/1a7dcca378e80a387923ee05c250d8ba122441c6) ([#6244](https://github.com/yt-dlp/yt-dlp/issues/6244)) by [bashonly](https://github.com/bashonly), [bergoid](https://github.com/bergoid), [jeroenj](https://github.com/jeroenj)
- **weverse**: [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/b844a3f8b16500663e7ab6c6ec061cc9b30f71ac) ([#6711](https://github.com/yt-dlp/yt-dlp/issues/6711)) by [bashonly](https://github.com/bashonly) (With fixes in [fd5d93f](https://github.com/yt-dlp/yt-dlp/commit/fd5d93f7040f9776fd541f4e4079dad7d3b3fb4f))
- **wevidi**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/1ea15603d852971ed7d92f4de12808b27b3d9370) ([#6868](https://github.com/yt-dlp/yt-dlp/issues/6868)) by [truedread](https://github.com/truedread)
- **weyyak**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/6dc00acf0f1f1107a626c21befd1691403e6aeeb) ([#7124](https://github.com/yt-dlp/yt-dlp/issues/7124)) by [ItzMaxTV](https://github.com/ItzMaxTV)
- **whyp**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/2c566ed14101673c651c08c306c30fa5b4010b85) ([#6803](https://github.com/yt-dlp/yt-dlp/issues/6803)) by [CoryTibbettsDev](https://github.com/CoryTibbettsDev)
- **wrestleuniverse**
- [Fix cookies support](https://github.com/yt-dlp/yt-dlp/commit/c8561c6d03f025268d6d3972abeb47987c8d7cbb) by [bashonly](https://github.com/bashonly)
- [Fix extraction, add login](https://github.com/yt-dlp/yt-dlp/commit/ef8fb7f029b816dfc95600727d84400591a3b5c5) ([#6982](https://github.com/yt-dlp/yt-dlp/issues/6982)) by [bashonly](https://github.com/bashonly), [Grub4K](https://github.com/Grub4K)
- **wykop**: [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/aed945e1b9b7d3af2a907e1a12e6508cc81d6a20) ([#6140](https://github.com/yt-dlp/yt-dlp/issues/6140)) by [selfisekai](https://github.com/selfisekai)
- **ximalaya**: [Sort playlist entries](https://github.com/yt-dlp/yt-dlp/commit/8790ea7b2536332777bce68590386b1aa935fac7) ([#7292](https://github.com/yt-dlp/yt-dlp/issues/7292)) by [linsui](https://github.com/linsui)
- **YahooGyaOIE, YahooGyaOPlayerIE**: [Delete extractors due to website closure](https://github.com/yt-dlp/yt-dlp/commit/68be95bd0ca3f76aa63c9812935bd826b3a42e53) ([#6218](https://github.com/yt-dlp/yt-dlp/issues/6218)) by [Lesmiscore](https://github.com/Lesmiscore)
- **yappy**: YappyProfile: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/6f69101dc912690338d32e2aab085c32e44eba3f) ([#7346](https://github.com/yt-dlp/yt-dlp/issues/7346)) by [7vlad7](https://github.com/7vlad7)
- **youku**: [Improve error message](https://github.com/yt-dlp/yt-dlp/commit/ef0848abd425dfda6db62baa8d72897eefb0007f) ([#6690](https://github.com/yt-dlp/yt-dlp/issues/6690)) by [carusocr](https://github.com/carusocr)
- **youporn**: [Extract m3u8 formats](https://github.com/yt-dlp/yt-dlp/commit/ddae33754ae1f32dd9c64cf895c47d20f6b5f336) by [pukkandan](https://github.com/pukkandan)
- **youtube**
- [Add client name to `format_note` when `-v`](https://github.com/yt-dlp/yt-dlp/commit/c795c39f27244cbce846067891827e4847036441) ([#6254](https://github.com/yt-dlp/yt-dlp/issues/6254)) by [Lesmiscore](https://github.com/Lesmiscore), [pukkandan](https://github.com/pukkandan)
- [Add extractor-arg `include_duplicate_formats`](https://github.com/yt-dlp/yt-dlp/commit/86cb922118b236306310a72657f70426c20e28bb) by [pukkandan](https://github.com/pukkandan)
- [Bypass throttling for `-f17`](https://github.com/yt-dlp/yt-dlp/commit/c9abebb851e6188cb34b9eb744c1863dd46af919) by [pukkandan](https://github.com/pukkandan)
- [Construct fragment list lazily](https://github.com/yt-dlp/yt-dlp/commit/2a23d92d9ec44a0168079e38bcf3d383e5c4c7bb) by [pukkandan](https://github.com/pukkandan) (With fixes in [e389d17](https://github.com/yt-dlp/yt-dlp/commit/e389d172b6f42e4f332ae679dc48543fb7b9b61d))
- [Define strict uploader metadata mapping](https://github.com/yt-dlp/yt-dlp/commit/7666b93604b97e9ada981c6b04ccf5605dd1bd44) ([#6384](https://github.com/yt-dlp/yt-dlp/issues/6384)) by [coletdjnz](https://github.com/coletdjnz)
- [Determine audio language using automatic captions](https://github.com/yt-dlp/yt-dlp/commit/ff9b0e071ffae5543cc309e6f9e647ac51e5846e) by [pukkandan](https://github.com/pukkandan)
- [Extract `channel_is_verified`](https://github.com/yt-dlp/yt-dlp/commit/8213ce28a485e200f6a7e1af1434a987c8e702bd) ([#7213](https://github.com/yt-dlp/yt-dlp/issues/7213)) by [coletdjnz](https://github.com/coletdjnz)
- [Extract `heatmap` data](https://github.com/yt-dlp/yt-dlp/commit/5caf30dbc34f10b0be60676fece635b5c59f0d72) ([#7100](https://github.com/yt-dlp/yt-dlp/issues/7100)) by [tntmod54321](https://github.com/tntmod54321)
- [Extract more metadata for comments](https://github.com/yt-dlp/yt-dlp/commit/c35448b7b14113b35c4415dbfbf488c4731f006f) ([#7179](https://github.com/yt-dlp/yt-dlp/issues/7179)) by [coletdjnz](https://github.com/coletdjnz)
- [Extract uploader metadata for feed/playlist items](https://github.com/yt-dlp/yt-dlp/commit/93e12ed76ef49252dc6869b59d21d0777e5e11af) by [coletdjnz](https://github.com/coletdjnz)
- [Fix comment loop detection for pinned comments](https://github.com/yt-dlp/yt-dlp/commit/141a8dff98874a426d7fbe772e0a8421bb42656f) ([#6714](https://github.com/yt-dlp/yt-dlp/issues/6714)) by [coletdjnz](https://github.com/coletdjnz)
- [Fix continuation loop with no comments](https://github.com/yt-dlp/yt-dlp/commit/18f8fba7c89a87f99cc3313a1795848867e84fff) ([#7148](https://github.com/yt-dlp/yt-dlp/issues/7148)) by [coletdjnz](https://github.com/coletdjnz)
- [Fix parsing `comment_count`](https://github.com/yt-dlp/yt-dlp/commit/071670cbeaa01ddf2cc20a95ae6da25f8f086431) ([#6523](https://github.com/yt-dlp/yt-dlp/issues/6523)) by [nick-cd](https://github.com/nick-cd)
- [Handle incomplete initial data from watch page](https://github.com/yt-dlp/yt-dlp/commit/607510b9f2f67bfe7d33d74031a5c1fe22a24862) ([#6510](https://github.com/yt-dlp/yt-dlp/issues/6510)) by [coletdjnz](https://github.com/coletdjnz)
- [Ignore wrong fps of some formats](https://github.com/yt-dlp/yt-dlp/commit/97afb093d4cbe5df889145afa5f9ede4535e93e4) by [pukkandan](https://github.com/pukkandan)
- [Misc cleanup](https://github.com/yt-dlp/yt-dlp/commit/14a14335b280766fbf5a469ae26836d6c1fe450a) by [coletdjnz](https://github.com/coletdjnz)
- [Prioritize premium formats](https://github.com/yt-dlp/yt-dlp/commit/51a07b0dca4c079d58311c19b6d1c097c24bb021) by [pukkandan](https://github.com/pukkandan)
- [Revert default formats to `https`](https://github.com/yt-dlp/yt-dlp/commit/c6786ff3baaf72a5baa4d56d34058e54cbcf8ceb) by [pukkandan](https://github.com/pukkandan)
- [Support podcasts and releases tabs](https://github.com/yt-dlp/yt-dlp/commit/447afb9eaa65bc677e3245c83e53a8e69c174a3c) by [coletdjnz](https://github.com/coletdjnz)
- [Support shorter relative time format](https://github.com/yt-dlp/yt-dlp/commit/2fb35f6004c7625f0dd493da4a5abf0690f7777c) ([#7191](https://github.com/yt-dlp/yt-dlp/issues/7191)) by [coletdjnz](https://github.com/coletdjnz)
- music_search_url: [Extract title](https://github.com/yt-dlp/yt-dlp/commit/69a40e4a7f6caa5662527ebd2f3c4e8aa02857a2) ([#7102](https://github.com/yt-dlp/yt-dlp/issues/7102)) by [kangalio](https://github.com/kangalio)
- **zaiko**
- [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/345b4c0aedd9d19898ce00d5cef35fe0d277a052) ([#7254](https://github.com/yt-dlp/yt-dlp/issues/7254)) by [c-basalt](https://github.com/c-basalt)
- ZaikoETicket: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/5cc09c004bd5edbbada9b041c08a720cadc4f4df) ([#7347](https://github.com/yt-dlp/yt-dlp/issues/7347)) by [pzhlkj6612](https://github.com/pzhlkj6612)
- **zdf**: [Fix formats extraction](https://github.com/yt-dlp/yt-dlp/commit/ee0ed0338df328cd986f97315c8162b5a151476d) by [bashonly](https://github.com/bashonly)
- **zee5**: [Fix extraction of new content](https://github.com/yt-dlp/yt-dlp/commit/9d7fde89a40360396f0baa2ee8bf507f92108b32) ([#7280](https://github.com/yt-dlp/yt-dlp/issues/7280)) by [bashonly](https://github.com/bashonly)
- **zingmp3**: [Fix and improve extractors](https://github.com/yt-dlp/yt-dlp/commit/17d7ca84ea723c20668bd9bfa938be7ea0e64f6b) ([#6367](https://github.com/yt-dlp/yt-dlp/issues/6367)) by [hatienl0i261299](https://github.com/hatienl0i261299)
- **zoom**
- [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/79c77e85b70ae3b9942d5a88c14d021a9bd24222) ([#6741](https://github.com/yt-dlp/yt-dlp/issues/6741)) by [shreyasminocha](https://github.com/shreyasminocha)
- [Fix share URL extraction](https://github.com/yt-dlp/yt-dlp/commit/90c1f5120694105496a6ad9e3ecfc6c25de6cae1) ([#6789](https://github.com/yt-dlp/yt-dlp/issues/6789)) by [bashonly](https://github.com/bashonly)
#### Downloader changes
- **curl**: [Fix progress reporting](https://github.com/yt-dlp/yt-dlp/commit/66aeaac9aa30b5959069ba84e53a5508232deb38) by [pukkandan](https://github.com/pukkandan)
- **fragment**: [Do not sleep between fragments](https://github.com/yt-dlp/yt-dlp/commit/424f3bf03305088df6e01d62f7311be8601ad3f4) by [pukkandan](https://github.com/pukkandan)
#### Postprocessor changes
- [Fix chapters if duration is not extracted](https://github.com/yt-dlp/yt-dlp/commit/01ddec7e661bf90dc4c34e6924eb9d7629886cef) ([#6037](https://github.com/yt-dlp/yt-dlp/issues/6037)) by [bashonly](https://github.com/bashonly)
- [Print newline for `--progress-template`](https://github.com/yt-dlp/yt-dlp/commit/13ff78095372fd98900a32572cf817994c07ccb5) by [pukkandan](https://github.com/pukkandan)
- **EmbedThumbnail, FFmpegMetadata**: [Fix error on attaching thumbnails and info json for mkv/mka](https://github.com/yt-dlp/yt-dlp/commit/0f0875ed555514f32522a0f30554fb08825d5124) ([#6647](https://github.com/yt-dlp/yt-dlp/issues/6647)) by [Lesmiscore](https://github.com/Lesmiscore)
- **FFmpegFixupM3u8PP**: [Check audio codec before fixup](https://github.com/yt-dlp/yt-dlp/commit/3f7e2bd80e3c5d8a1682f20a1b245fcd974f295d) ([#6778](https://github.com/yt-dlp/yt-dlp/issues/6778)) by [bashonly](https://github.com/bashonly)
- **FixupDuplicateMoov**: [Fix bug in triggering](https://github.com/yt-dlp/yt-dlp/commit/26010b5cec50193b98ad7845d1d77450f9f14c2b) by [pukkandan](https://github.com/pukkandan)
#### Misc. changes
- [Add automatic duplicate issue detection](https://github.com/yt-dlp/yt-dlp/commit/15b2d3db1d40b0437fca79d8874d392aa54b3cdd) by [pukkandan](https://github.com/pukkandan)
- **build**
- [Fix macOS target](https://github.com/yt-dlp/yt-dlp/commit/44a79958f0b596ee71e1eb25f158610aada29d1b) by [Grub4K](https://github.com/Grub4K)
- [Implement build verification using `--update-to`](https://github.com/yt-dlp/yt-dlp/commit/b73193c99aa23b135732408a5fcf655c68d731c6) by [bashonly](https://github.com/bashonly), [Grub4K](https://github.com/Grub4K)
- [Pin `pyinstaller` version for MacOS](https://github.com/yt-dlp/yt-dlp/commit/427a8fafbb0e18c28d0ed7960be838d7b26b88d3) by [pukkandan](https://github.com/pukkandan)
- [Various build workflow improvements](https://github.com/yt-dlp/yt-dlp/commit/c4efa0aefec8daef1de62fd1693f13edf3c8b03c) by [bashonly](https://github.com/bashonly), [Grub4K](https://github.com/Grub4K)
- **cleanup**
- Miscellaneous
- [6f2287c](https://github.com/yt-dlp/yt-dlp/commit/6f2287cb18cbfb27518f068d868fa9390fee78ad) by [pukkandan](https://github.com/pukkandan)
- [ad54c91](https://github.com/yt-dlp/yt-dlp/commit/ad54c9130e793ce433bf9da334fa80df9f3aee58) by [freezboltz](https://github.com/freezboltz), [mikf](https://github.com/mikf), [pukkandan](https://github.com/pukkandan)
- **cleanup, utils**: [Split into submodules](https://github.com/yt-dlp/yt-dlp/commit/69bec6730ec9d724bcedeab199d9d684d61423ba) ([#7090](https://github.com/yt-dlp/yt-dlp/issues/7090)) by [coletdjnz](https://github.com/coletdjnz), [pukkandan](https://github.com/pukkandan)
- **cli_to_api**: [Add script](https://github.com/yt-dlp/yt-dlp/commit/46f1370e9af6f8af8762f67e27e5acb8f0c48a47) by [pukkandan](https://github.com/pukkandan) (see the sketch after this list)
- **devscripts**: `make_changelog`: [Various improvements](https://github.com/yt-dlp/yt-dlp/commit/23c39a4beadee382060bb47fdaa21316ca707d38) by [Grub4K](https://github.com/Grub4K)
- **docs**: [Misc improvements](https://github.com/yt-dlp/yt-dlp/commit/c8bc203fbf3bb09914e53f0833eed622ab7edbb9) by [pukkandan](https://github.com/pukkandan)
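The `cli_to_api` script added above translates yt-dlp's CLI flags into the options dict used by the embedding API. A hedged sketch of an invocation, assuming the script accepts ordinary yt-dlp CLI arguments; the flag-to-option pairs in the comments are illustrative, not the script's literal output:

```sh
# Sketch under assumptions: devscripts/cli_to_api.py is assumed to take
# ordinary yt-dlp CLI arguments and print the corresponding YoutubeDL
# API options, roughly:
#   -f "bv*+ba"  ->  {"format": "bv*+ba"}
#   --no-mtime   ->  {"updatetime": False}
python devscripts/cli_to_api.py -f "bv*+ba" --no-mtime
```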
### 2023.03.04
#### Extractor changes
- bilibili
- [Fix for downloading wrong subtitles](https://github.com/yt-dlp/yt-dlp/commit/8a83baaf218ab89e6e7faa76b7c7be3a2ec19e3a) ([#6358](https://github.com/yt-dlp/yt-dlp/issues/6358)) by [LXYan2333](https://github.com/LXYan2333)
- ESPNcricinfo
- [Handle new URL pattern](https://github.com/yt-dlp/yt-dlp/commit/640c934823fc2d1ec77ec932566078014058635f) ([#6321](https://github.com/yt-dlp/yt-dlp/issues/6321)) by [venkata-krishnas](https://github.com/venkata-krishnas)
- lefigaro
- [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/eb8fd6d044e8926532772b72be0645c6b8ecb3aa) ([#6309](https://github.com/yt-dlp/yt-dlp/issues/6309)) by [elyse0](https://github.com/elyse0)
- lumni
- [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/1f8489cccbdc6e96027ef527b88717458f0900e8) ([#6302](https://github.com/yt-dlp/yt-dlp/issues/6302)) by [carusocr](https://github.com/carusocr)
- Prankcast
- [Fix tags](https://github.com/yt-dlp/yt-dlp/commit/ed4cc4ea793314c50ae3f82e98248c1de1c25694) ([#6316](https://github.com/yt-dlp/yt-dlp/issues/6316)) by [columndeeply](https://github.com/columndeeply)
- rutube
- [Extract chapters from description](https://github.com/yt-dlp/yt-dlp/commit/22ccd5420b3eb0782776071f12cccd1fedaa1fd0) ([#6345](https://github.com/yt-dlp/yt-dlp/issues/6345)) by [mushbite](https://github.com/mushbite)
- SportDeutschland
- [Rewrite extractor](https://github.com/yt-dlp/yt-dlp/commit/45db357289b4e1eec09093c8bc5446520378f426) by [pukkandan](https://github.com/pukkandan)
- telecaribe
- [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/b40471282286bd2b09c485bf79afd271d229272c) ([#6311](https://github.com/yt-dlp/yt-dlp/issues/6311)) by [elyse0](https://github.com/elyse0)
- tubetugraz
- [Support `--twofactor` (#6424)](https://github.com/yt-dlp/yt-dlp/commit/f44cb4e77bb9be8be291d02ab6f79dc0b4c0d4a1) ([#6427](https://github.com/yt-dlp/yt-dlp/issues/6427)) by [Ferdi265](https://github.com/Ferdi265)
- tunein
- [Fix extractors](https://github.com/yt-dlp/yt-dlp/commit/46580ced56c90b559885aded6aa8f46f20a9cdce) ([#6310](https://github.com/yt-dlp/yt-dlp/issues/6310)) by [elyse0](https://github.com/elyse0)
- twitch
- [Update for GraphQL API changes](https://github.com/yt-dlp/yt-dlp/commit/4a6272c6d1bff89969b67cd22b26ebe6d7e72279) ([#6318](https://github.com/yt-dlp/yt-dlp/issues/6318)) by [elyse0](https://github.com/elyse0)
- twitter
- [Fix retweet extraction](https://github.com/yt-dlp/yt-dlp/commit/cf605226521e99c89fc8dff26a319025810e63a0) ([#6422](https://github.com/yt-dlp/yt-dlp/issues/6422)) by [selfisekai](https://github.com/selfisekai)
- xvideos
- quickies: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/283a0b5bc511f3b350eead4488158f50c20ec526) ([#6414](https://github.com/yt-dlp/yt-dlp/issues/6414)) by [Yakabuff](https://github.com/Yakabuff)
#### Misc. changes
- build
- [Fix publishing to PyPI and homebrew](https://github.com/yt-dlp/yt-dlp/commit/55676fe498345a389a2539d8baaba958d6d61c3e) by [bashonly](https://github.com/bashonly)
- [Only archive if `vars.ARCHIVE_REPO` is set](https://github.com/yt-dlp/yt-dlp/commit/08ff6d59f97b5f5f0128f6bf6fbef56fd836cc52) by [Grub4K](https://github.com/Grub4K)
- cleanup
- Miscellaneous: [392389b](https://github.com/yt-dlp/yt-dlp/commit/392389b7df7b818f794b231f14dc396d4875fbad) by [pukkandan](https://github.com/pukkandan)
- devscripts
- `make_changelog`: [Stop at `Release ...` commit](https://github.com/yt-dlp/yt-dlp/commit/7accdd9845fe7ce9d0aa5a9d16faaa489c1294eb) by [pukkandan](https://github.com/pukkandan)
### 2023.03.03
#### Important changes
- **A new release type has been added!**
* [`nightly`](https://github.com/yt-dlp/yt-dlp/releases/tag/nightly) builds will be made after each push, containing the latest fixes (but also possibly bugs).
* When using `--update`/`-U`, a release binary will only update to its current channel (either `stable` or `nightly`).
* The `--update-to` option has been added allowing the user more control over program upgrades (or downgrades).
* `--update-to` can change the release channel (`stable`, `nightly`) and also upgrade or downgrade to specific tags.
* **Usage**: `--update-to CHANNEL`, `--update-to TAG`, `--update-to CHANNEL@TAG` (see the example after this list)
- **YouTube throttling fixes!**
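For the new `--update-to` option described above, here is a hedged example of the three documented forms; `stable` and `nightly` are the channels named above, and the tag shown is just an illustrative release from this changelog:

```sh
# Sketch only: the three --update-to forms listed under "Usage" above
yt-dlp --update-to nightly             # CHANNEL: switch this binary to the nightly channel
yt-dlp --update-to 2023.03.04          # TAG: upgrade or downgrade to a specific tag
yt-dlp --update-to stable@2023.03.04   # CHANNEL@TAG: pin both channel and tag
```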
#### Core changes
- [Add option `--break-match-filters`](https://github.com/yt-dlp/yt-dlp/commit/fe2ce85aff0aa03735fc0152bb8cb9c3d4ef0753) by [pukkandan](https://github.com/pukkandan)
- [Fix `--break-on-existing` with `--lazy-playlist`](https://github.com/yt-dlp/yt-dlp/commit/d21056f4cf0a1623daa107f9181074f5725ac436) by [pukkandan](https://github.com/pukkandan)
- dependencies
- [Simplify `Cryptodome`](https://github.com/yt-dlp/yt-dlp/commit/65f6e807804d2af5e00f2aecd72bfc43af19324a) by [pukkandan](https://github.com/pukkandan)
- jsinterp
- [Handle `Date` at epoch 0](https://github.com/yt-dlp/yt-dlp/commit/9acf1ee25f7ad3920ede574a9de95b8c18626af4) by [pukkandan](https://github.com/pukkandan)
- plugins
- [Don't look in `.egg` directories](https://github.com/yt-dlp/yt-dlp/commit/b059188383eee4fa336ef728dda3ff4bb7335625) by [pukkandan](https://github.com/pukkandan)
- update
- [Add option `--update-to`, including to nightly](https://github.com/yt-dlp/yt-dlp/commit/77df20f14cc9ed41dfe3a1fe2d77fd27f5365a94) ([#6220](https://github.com/yt-dlp/yt-dlp/issues/6220)) by [bashonly](https://github.com/bashonly), [Grub4K](https://github.com/Grub4K), [pukkandan](https://github.com/pukkandan)
- utils
- `LenientJSONDecoder`: [Parse unclosed objects](https://github.com/yt-dlp/yt-dlp/commit/cc09083636ce21e58ff74f45eac2dbda507462b0) by [pukkandan](https://github.com/pukkandan)
- `Popen`: [Shim undocumented `text_mode` property](https://github.com/yt-dlp/yt-dlp/commit/da8e2912b165005f76779a115a071cd6132ceedf) by [Grub4K](https://github.com/Grub4K)
#### Extractor changes
- [Fix DRM detection in m3u8](https://github.com/yt-dlp/yt-dlp/commit/43a3eaf96393b712d60cbcf5c6cb1e90ed7f42f5) by [pukkandan](https://github.com/pukkandan)
- generic
- [Detect manifest links via extension](https://github.com/yt-dlp/yt-dlp/commit/b38cae49e6f4849c8ee2a774bdc3c1c647ae5f0e) by [bashonly](https://github.com/bashonly)
- [Handle basic-auth when checking redirects](https://github.com/yt-dlp/yt-dlp/commit/8e9fe43cd393e69fa49b3d842aa3180c1d105b8f) by [pukkandan](https://github.com/pukkandan)
- GoogleDrive
- [Fix some audio](https://github.com/yt-dlp/yt-dlp/commit/4d248e29d20d983ededab0b03d4fe69dff9eb4ed) by [pukkandan](https://github.com/pukkandan)
- iprima
- [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/9fddc12ab022a31754e0eaa358fc4e1dfa974587) ([#6291](https://github.com/yt-dlp/yt-dlp/issues/6291)) by [std-move](https://github.com/std-move)
- mediastream
- [Improve WinSports support](https://github.com/yt-dlp/yt-dlp/commit/2d5a8c5db2bd4ff1c2e45e00cd890a10f8ffca9e) ([#6401](https://github.com/yt-dlp/yt-dlp/issues/6401)) by [bashonly](https://github.com/bashonly)
- ntvru
- [Extract HLS and DASH formats](https://github.com/yt-dlp/yt-dlp/commit/77d6d136468d0c23c8e79bc937898747804f585a) ([#6403](https://github.com/yt-dlp/yt-dlp/issues/6403)) by [bashonly](https://github.com/bashonly)
- tencent
- [Add more formats and info](https://github.com/yt-dlp/yt-dlp/commit/18d295c9e0f95adc179eef345b7af64d6372db78) ([#5950](https://github.com/yt-dlp/yt-dlp/issues/5950)) by [Hill-98](https://github.com/Hill-98)
- yle_areena
- [Extract non-Kaltura videos](https://github.com/yt-dlp/yt-dlp/commit/40d77d89027cd0e0ce31d22aec81db3e1d433900) ([#6402](https://github.com/yt-dlp/yt-dlp/issues/6402)) by [bashonly](https://github.com/bashonly)
- youtube
- [Construct dash formats with `range` query](https://github.com/yt-dlp/yt-dlp/commit/5038f6d713303e0967d002216e7a88652401c22a) by [pukkandan](https://github.com/pukkandan) (With fixes in [f34804b](https://github.com/yt-dlp/yt-dlp/commit/f34804b2f920f62a6e893a14a9e2a2144b14dd23) by [bashonly](https://github.com/bashonly), [coletdjnz](https://github.com/coletdjnz))
- [Detect and break on looping comments](https://github.com/yt-dlp/yt-dlp/commit/7f51861b1820c37b157a239b1fe30628d907c034) ([#6301](https://github.com/yt-dlp/yt-dlp/issues/6301)) by [coletdjnz](https://github.com/coletdjnz)
- [Extract channel `view_count` when `/about` tab is passed](https://github.com/yt-dlp/yt-dlp/commit/31e183557fcd1b937582f9429f29207c1261f501) by [pukkandan](https://github.com/pukkandan)
#### Misc. changes
- build
- [Add `cffi` as a dependency for `yt_dlp_linux`](https://github.com/yt-dlp/yt-dlp/commit/776d1c3f0c9b00399896dd2e40e78e9a43218109) by [bashonly](https://github.com/bashonly)
- [Automated builds and nightly releases](https://github.com/yt-dlp/yt-dlp/commit/29cb20bd563c02671b31dd840139e93dd37150a1) ([#6220](https://github.com/yt-dlp/yt-dlp/issues/6220)) by [bashonly](https://github.com/bashonly), [Grub4K](https://github.com/Grub4K) (With fixes in [bfc861a](https://github.com/yt-dlp/yt-dlp/commit/bfc861a91ee65c9b0ac169754f512e052c6827cf) by [pukkandan](https://github.com/pukkandan))
- [Sign SHA files and release public key](https://github.com/yt-dlp/yt-dlp/commit/12647e03d417feaa9ea6a458bea5ebd747494a53) by [Grub4K](https://github.com/Grub4K)
- cleanup
- [Fix `Changelog`](https://github.com/yt-dlp/yt-dlp/commit/17ca19ab60a6a13eb8a629c51442b5248b0d8394) by [pukkandan](https://github.com/pukkandan)
- jsinterp: [Give functions names to help debugging](https://github.com/yt-dlp/yt-dlp/commit/b2e0343ba0fc5d8702e90f6ba2b71358e2677e0b) by [pukkandan](https://github.com/pukkandan)
- Miscellaneous: [4815bbf](https://github.com/yt-dlp/yt-dlp/commit/4815bbfc41cf641e4a0650289dbff968cb3bde76), [5b28cef](https://github.com/yt-dlp/yt-dlp/commit/5b28cef72db3b531680d89c121631c73ae05354f) by [pukkandan](https://github.com/pukkandan)
- devscripts
- [Script to generate changelog](https://github.com/yt-dlp/yt-dlp/commit/d400e261cf029a3f20d364113b14de973be75404) ([#6220](https://github.com/yt-dlp/yt-dlp/issues/6220)) by [Grub4K](https://github.com/Grub4K) (With fixes in [9344964](https://github.com/yt-dlp/yt-dlp/commit/93449642815a6973a4b09b289982ca7e1f961b5f))
### 2023.02.17
* Merge youtube-dl: Upto [commit/2dd6c6e](https://github.com/ytdl-org/youtube-dl/commit/2dd6c6e)
* Fix `--concat-playlist`
* Imply `--no-progress` when `--print`
* Improve default subtitle language selection by [sdht0](https://github.com/sdht0)
* Make `title` completely non-fatal
* Sanitize formats before sorting by [pukkandan](https://github.com/pukkandan)
* Support module level `__bool__` and `property`
* [dependencies] Standardize `Cryptodome` imports
* [hls] Allow extractors to provide AES key by [Grub4K](https://github.com/Grub4K), [bashonly](https://github.com/bashonly)
* [ExtractAudio] Handle outtmpl without ext by [carusocr](https://github.com/carusocr)
* [extractor/common] Fix `_search_nuxt_data` by [LowSuggestion912](https://github.com/LowSuggestion912)
* [extractor/generic] Avoid catastrophic backtracking in KVS regex by [bashonly](https://github.com/bashonly)
* [jsinterp] Support `if` statements
* [plugins] Fix zip search paths
* [utils] `traverse_obj`: Various improvements by [Grub4K](https://github.com/Grub4K)
* [utils] `traverse_obj`: Fix more bugs
* [utils] `traverse_obj`: Fix several behavioral problems by [Grub4K](https://github.com/Grub4K)
* [utils] Don't use Content-length with encoding by [felixonmars](https://github.com/felixonmars)
* [utils] Fix `time_seconds` to use the provided TZ by [Grub4K](https://github.com/Grub4K), [Lesmiscore](https://github.com/Lesmiscore)
* [utils] Fix race condition in `make_dir` by [aionescu](https://github.com/aionescu)
* [utils] Use local kernel32 for file locking on Windows by [Grub4K](https://github.com/Grub4K)
* [compat_utils] Improve `passthrough_module`
* [compat_utils] Simplify `EnhancedModule`
* [build] Update pyinstaller
* [pyinst] Fix for pyinstaller 5.8
* [devscripts] Provide `pyinstaller` hooks
* [devscripts/pyinstaller] Analyze sub-modules of `Cryptodome`
* [cleanup] Misc fixes and cleanup
* [extractor/anchorfm] Add episode extractor by [HobbyistDev](https://github.com/HobbyistDev), [bashonly](https://github.com/bashonly)
* [extractor/boxcast] Add extractor by [HobbyistDev](https://github.com/HobbyistDev)
* [extractor/ebay] Add extractor by [JChris246](https://github.com/JChris246)
* [extractor/hypergryph] Add extractor by [HobbyistDev](https://github.com/HobbyistDev), [bashonly](https://github.com/bashonly)
* [extractor/NZOnScreen] Add extractor by [gregsadetsky](https://github.com/gregsadetsky), [pukkandan](https://github.com/pukkandan)
* [extractor/rozhlas] Add extractor RozhlasVltavaIE by [amra](https://github.com/amra)
* [extractor/tempo] Add IVXPlayer extractor by [HobbyistDev](https://github.com/HobbyistDev)
* [extractor/txxx] Add extractors by [chio0hai](https://github.com/chio0hai)
* [extractor/vocaroo] Add extractor by [SuperSonicHub1](https://github.com/SuperSonicHub1), [qbnu](https://github.com/qbnu)
* [extractor/wrestleuniverse] Add extractors by [Grub4K](https://github.com/Grub4K), [bashonly](https://github.com/bashonly)
* [extractor/yappy] Add extractor by [HobbyistDev](https://github.com/HobbyistDev), [dirkf](https://github.com/dirkf)
* [extractor/youtube] **Fix `uploader_id` extraction** by [bashonly](https://github.com/bashonly)
* [extractor/youtube] Add hyperpipe instances by [Generator](https://github.com/Generator)
* [extractor/youtube] Handle `consent.youtube`
* [extractor/youtube] Support `/live/` URL
* [extractor/youtube] Update invidious and piped instances by [rohieb](https://github.com/rohieb)
* [extractor/91porn] Fix title and comment extraction by [pmitchell86](https://github.com/pmitchell86)
* [extractor/AbemaTV] Cache user token whenever appropriate by [Lesmiscore](https://github.com/Lesmiscore)
* [extractor/bfmtv] Support `rmc` prefix by [carusocr](https://github.com/carusocr)
* [extractor/biliintl] Add intro and ending chapters by [HobbyistDev](https://github.com/HobbyistDev)
* [extractor/clyp] Support `wav` by [qulaz](https://github.com/qulaz)
* [extractor/crunchyroll] Add intro chapter by [ByteDream](https://github.com/ByteDream)
* [extractor/crunchyroll] Better message for premium videos
* [extractor/crunchyroll] Fix incorrect premium-only error by [Grub4K](https://github.com/Grub4K)
* [extractor/DouyuTV] Use new API by [hatienl0i261299](https://github.com/hatienl0i261299)
* [extractor/embedly] Embedded links may be for other extractors
* [extractor/freesound] Workaround invalid URL in webpage by [rebane2001](https://github.com/rebane2001)
* [extractor/GoPlay] Use new API by [jeroenj](https://github.com/jeroenj)
* [extractor/Hidive] Fix subtitles and age-restriction by [chexxor](https://github.com/chexxor)
* [extractor/huya] Support HD streams by [felixonmars](https://github.com/felixonmars)
* [extractor/moviepilot] Fix extractor by [panatexxa](https://github.com/panatexxa)
* [extractor/nbc] Fix `NBC` and `NBCStations` extractors by [bashonly](https://github.com/bashonly)
* [extractor/nbc] Fix XML parsing by [bashonly](https://github.com/bashonly)
* [extractor/nebula] Remove broken cookie support by [hheimbuerger](https://github.com/hheimbuerger)
* [extractor/nfl] Add `NFLPlus` extractors by [bashonly](https://github.com/bashonly)
* [extractor/niconico] Add support for like history by [Matumo](https://github.com/Matumo), [pukkandan](https://github.com/pukkandan)
* [extractor/nitter] Update instance list by [OIRNOIR](https://github.com/OIRNOIR)
* [extractor/npo] Fix extractor and add HD support by [seproDev](https://github.com/seproDev)
* [extractor/odkmedia] Add `OnDemandChinaEpisodeIE` by [HobbyistDev](https://github.com/HobbyistDev), [pukkandan](https://github.com/pukkandan)
* [extractor/pornez] Handle relative URLs in iframe by [JChris246](https://github.com/JChris246)
* [extractor/radiko] Fix format sorting for Time Free by [road-master](https://github.com/road-master)
* [extractor/rcs] Fix extractors by [nixxo](https://github.com/nixxo), [pukkandan](https://github.com/pukkandan)
* [extractor/reddit] Support user posts by [OMEGARAZER](https://github.com/OMEGARAZER)
* [extractor/rumble] Fix format sorting by [pukkandan](https://github.com/pukkandan)
* [extractor/servus] Rewrite extractor by [Ashish0804](https://github.com/Ashish0804), [FrankZ85](https://github.com/FrankZ85), [StefanLobbenmeier](https://github.com/StefanLobbenmeier)
* [extractor/slideslive] Fix slides and chapters/duration by [bashonly](https://github.com/bashonly)
* [extractor/SportDeutschland] Fix extractor by [FriedrichRehren](https://github.com/FriedrichRehren)
* [extractor/Stripchat] Fix extractor by [JChris246](https://github.com/JChris246), [bashonly](https://github.com/bashonly)
* [extractor/tnaflix] Fix extractor by [bashonly](https://github.com/bashonly), [oxamun](https://github.com/oxamun)
* [extractor/tvp] Support `stream.tvp.pl` by [selfisekai](https://github.com/selfisekai)
* [extractor/twitter] Fix `--no-playlist` and add media `view_count` when using GraphQL by [Grub4K](https://github.com/Grub4K)
* [extractor/twitter] Fix graphql extraction on some tweets by [selfisekai](https://github.com/selfisekai)
* [extractor/vimeo] Fix `playerConfig` extraction by [LeoniePhiline](https://github.com/LeoniePhiline), [bashonly](https://github.com/bashonly)
|
||||||
|
* [extractor/viu] Add `ViuOTTIndonesiaIE` extractor by [HobbyistDev](https://github.com/HobbyistDev)
|
||||||
|
* [extractor/vk] Fix playlists for new API by [the-marenga](https://github.com/the-marenga)
|
||||||
|
* [extractor/vlive] Replace with `VLiveWebArchiveIE` by [seproDev](https://github.com/seproDev)
|
||||||
|
* [extractor/ximalaya] Update album `_VALID_URL` by [carusocr](https://github.com/carusocr)
|
||||||
|
* [extractor/zdf] Use android API endpoint for UHD downloads by [seproDev](https://github.com/seproDev)
|
||||||
|
* [extractor/drtv] Fix bug in [ab4cbef](https://github.com/yt-dlp/yt-dlp/commit/ab4cbef) by [bashonly](https://github.com/bashonly)
|
||||||
|
|
||||||
|
|
||||||
### 2023.01.06

* Fix config locations by [Grub4K](https://github.com/Grub4K), [coletdjnz](https://github.com/coletdjnz), [pukkandan](https://github.com/pukkandan)
* [downloader/aria2c] Disable native progress
* [utils] `mimetype2ext`: `weba` is not standard
* [utils] `windows_enable_vt_mode`: Better error handling
### 2023.01.02

* Add `--compat-options 2021,2022`
    * This allows devs to change defaults and make other potentially breaking changes more easily. If you need everything to work exactly as-is, put `--compat 2022` in your config to guard against future compat changes
* [downloader/aria2c] Native progress for aria2c via RPC by [Lesmiscore](https://github.com/Lesmiscore), [pukkandan](https://github.com/pukkandan)
* Merge youtube-dl: Upto [commit/195f22f](https://github.com/ytdl-org/youtube-dl/commit/195f22f6) by [Grub4K](https://github.com/Grub4K), [pukkandan](https://github.com/pukkandan)
* Add pre-processor stage `video`
* Let `--parse/replace-in-metadata` run at any post-processing stage
* Add `--enable-file-urls` by [coletdjnz](https://github.com/coletdjnz)
* [extractor/udemy] Fix lectures that have no URL and detect DRM
* [extractor/unsupported] Add more URLs
* [extractor/urplay] Support for audio-only formats by [barsnick](https://github.com/barsnick)
* [extractor/wistia] Improve extension detection by [Grub4K](https://github.com/Grub4K), [bashonly](https://github.com/bashonly), [pukkandan](https://github.com/pukkandan)
* [extractor/yle_areena] Support restricted videos by [docbender](https://github.com/docbender)
* [extractor/youku] Fix extractor by [KurtBestor](https://github.com/KurtBestor)
* [extractor/youporn] Fix metadata by [marieell](https://github.com/marieell)
# Collaborators

## [pukkandan](https://github.com/pukkandan)

[![ko-fi](https://img.shields.io/badge/_-Ko--fi-red.svg?logo=kofi&labelColor=555555&style=for-the-badge)](https://ko-fi.com/pukkandan)
[![gh-sponsor](https://img.shields.io/badge/_-Github-white.svg?logo=github&labelColor=555555&style=for-the-badge)](https://github.com/sponsors/pukkandan)

* Owner of the fork

## [coletdjnz](https://github.com/coletdjnz)

[![gh-sponsor](https://img.shields.io/badge/_-Github-white.svg?logo=github&labelColor=555555&style=for-the-badge)](https://github.com/sponsors/coletdjnz)

* Improved plugin architecture
* YouTube improvements including: age-gate bypass, private playlists, multiple-clients (to avoid throttling) and a lot of under-the-hood improvements
* Added support for new websites YoutubeWebArchive, MainStreaming, PRX, nzherald, Mediaklikk, StarTV etc
* Improved/fixed support for Patreon, panopto, gfycat, itv, pbs, SouthParkDE etc

## [Ashish0804](https://github.com/Ashish0804) <sub><sup>[Inactive]</sup></sub>

* Improved/fixed support for HiDive, HotStar, Hungama, LBRY, LinkedInLearning, Mxplayer, SonyLiv, TV2, Vimeo, VLive etc

## [Lesmiscore](https://github.com/Lesmiscore)

**Bitcoin**: bc1qfd02r007cutfdjwjmyy9w23rjvtls6ncve7r3s
**Monacoin**: mona1q3tf7dzvshrhfe3md379xtvt2n22duhglv5dskr

## [bashonly](https://github.com/bashonly)

* `--update-to`, automated release, nightly builds
* `--cookies-from-browser` support for Firefox containers
* Added support for new websites Genius, Kick, NBCStations, Triller, VideoKen etc
* Improved/fixed support for Anvato, Brightcove, Instagram, ParamountPlus, Reddit, SlidesLive, TikTok, Twitter, Vimeo etc

## [Grub4K](https://github.com/Grub4K)

[![ko-fi](https://img.shields.io/badge/_-Ko--fi-red.svg?logo=kofi&labelColor=555555&style=for-the-badge)](https://ko-fi.com/Grub4K) [![gh-sponsor](https://img.shields.io/badge/_-Github-white.svg?logo=github&labelColor=555555&style=for-the-badge)](https://github.com/sponsors/Grub4K)

* `--update-to`, automated release, nightly builds
* Rework internals like `traverse_obj`, various core refactors and bug fixes
* Helped fix crunchyroll, Twitter, wrestleuniverse, wistia, slideslive etc
Makefile

```
offlinetest: codetest
	$(PYTHON) -m pytest -k "not download"

# XXX: This is hard to maintain
CODE_FOLDERS = yt_dlp yt_dlp/downloader yt_dlp/extractor yt_dlp/postprocessor yt_dlp/compat yt_dlp/compat/urllib yt_dlp/utils yt_dlp/dependencies
yt-dlp: yt_dlp/*.py yt_dlp/*/*.py
	mkdir -p zip
	for d in $(CODE_FOLDERS) ; do \
```
README.md

    * [Extractor Options](#extractor-options)
* [CONFIGURATION](#configuration)
    * [Configuration file encoding](#configuration-file-encoding)
    * [Authentication with netrc](#authentication-with-netrc)
    * [Notes about environment variables](#notes-about-environment-variables)
* [OUTPUT TEMPLATE](#output-template)
    * [Output template examples](#output-template-examples)
# NEW FEATURES

* Forked from [**yt-dlc@f9401f2**](https://github.com/blackjack4494/yt-dlc/commit/f9401f2a91987068139c5f757b12fc711d4c0cee) and merged with [**youtube-dl@42f2d4**](https://github.com/yt-dlp/yt-dlp/commit/42f2d4) ([exceptions](https://github.com/yt-dlp/yt-dlp/issues/21))

* **[SponsorBlock Integration](#sponsorblock-options)**: You can mark/remove sponsor sections in YouTube videos by utilizing the [SponsorBlock](https://sponsor.ajay.app) API

* **Merged with animelover1984/youtube-dl**: You get most of the features and improvements from [animelover1984/youtube-dl](https://github.com/animelover1984/youtube-dl) including `--write-comments`, `BiliBiliSearch`, `BilibiliChannel`, Embedding thumbnail in mp4/ogg/opus, playlist infojson etc. Note that NicoNico livestreams are not available. See [#31](https://github.com/yt-dlp/yt-dlp/pull/31) for details.

* **YouTube improvements**:
    * Supports Clips, Stories (`ytstories:<channel UCID>`), Search (including filters)**\***, YouTube Music Search, Channel-specific search, Search prefixes (`ytsearch:`, `ytsearchdate:`)**\***, Mixes, and Feeds (`:ytfav`, `:ytwatchlater`, `:ytsubs`, `:ythistory`, `:ytrec`, `:ytnotif`)
    * Fix for [n-sig based throttling](https://github.com/ytdl-org/youtube-dl/issues/29326) **\***
    * Supports some (but not all) age-gated content without cookies
    * Download livestreams from the start using `--live-from-start` (*experimental*)

* **Output template improvements**: Output templates can now have date-time formatting, numeric offsets, object traversal etc. See [output template](#output-template) for details. Even more advanced operations can also be done with the help of `--parse-metadata` and `--replace-in-metadata`

* **Other new options**: Many new options have been added such as `--alias`, `--print`, `--concat-playlist`, `--wait-for-video`, `--retry-sleep`, `--sleep-requests`, `--convert-thumbnails`, `--force-download-archive`, `--force-overwrites`, `--break-match-filter` etc

* **Improvements**: Regex and other operators in `--format`/`--match-filter`, multiple `--postprocessor-args` and `--downloader-args`, faster archive checking, more [format selection options](#format-selection), merge multi-video/audio, multiple `--config-locations`, `--exec` at different stages, etc

* **Plugins**: Extractors and PostProcessors can be loaded from an external file. See [plugins](#plugins) for details

* **Self updater**: The releases can be updated using `yt-dlp -U`, and downgraded using `--update-to` if required

* **Nightly builds**: [Automated nightly builds](#update-channels) can be used with `--update-to nightly`

See [changelog](Changelog.md) or [commits](https://github.com/yt-dlp/yt-dlp/commits) for the full list of changes
### Differences in default behavior

Some of yt-dlp's default options are different from that of youtube-dl and youtube-dlc:

* yt-dlp supports only [Python 3.7+](## "Windows 7"), and *may* remove support for more versions as they [become EOL](https://devguide.python.org/versions/#python-release-cycle); while [youtube-dl still supports Python 2.6+ and 3.2+](https://github.com/ytdl-org/youtube-dl/issues/30568#issue-1118238743)
* The options `--auto-number` (`-A`), `--title` (`-t`) and `--literal` (`-l`) no longer work. See [removed options](#Removed) for details
* `avconv` is not supported as an alternative to `ffmpeg`
* yt-dlp stores config files in slightly different locations to youtube-dl. See [CONFIGURATION](#configuration) for a list of correct locations
* The upload dates extracted from YouTube are in UTC [when available](https://github.com/yt-dlp/yt-dlp/blob/89e4d86171c7b7c997c77d4714542e0383bf0db0/yt_dlp/extractor/youtube.py#L3898-L3900). Use `--compat-options no-youtube-prefer-utc-upload-date` to prefer the non-UTC upload date.
* If `ffmpeg` is used as the downloader, the downloading and merging of formats happen in a single step when possible. Use `--compat-options no-direct-merge` to revert this
* Thumbnail embedding in `mp4` is done with mutagen if possible. Use `--compat-options embed-thumbnail-atomicparsley` to force the use of AtomicParsley instead
* Some internal metadata such as filenames are removed by default from the infojson. Use `--no-clean-infojson` or `--compat-options no-clean-infojson` to revert this
* When `--embed-subs` and `--write-subs` are used together, the subtitles are written to disk and also embedded in the media file. You can use just `--embed-subs` to embed the subs and automatically delete the separate file. See [#630 (comment)](https://github.com/yt-dlp/yt-dlp/issues/630#issuecomment-893659460) for more info. `--compat-options no-keep-subs` can be used to revert this
* `certifi` will be used for SSL root certificates, if installed. If you want to use system certificates (e.g. self-signed), use `--compat-options no-certifi`
* yt-dlp's sanitization of invalid characters in filenames is different/smarter than in youtube-dl. You can use `--compat-options filename-sanitization` to revert to youtube-dl's behavior
* yt-dlp tries to parse the external downloader outputs into the standard progress output if possible (Currently implemented: [~~aria2c~~](https://github.com/yt-dlp/yt-dlp/issues/5931)). You can use `--compat-options no-external-downloader-progress` to get the downloader output as-is
* yt-dlp versions between 2021.09.01 and 2023.01.02 applied `--match-filter` to nested playlists. This was an unintentional side-effect of [8f18ac](https://github.com/yt-dlp/yt-dlp/commit/8f18aca8717bb0dd49054555af8d386e5eda3a88) and is fixed in [d7b460](https://github.com/yt-dlp/yt-dlp/commit/d7b460d0e5fc710950582baed2e3fc616ed98a80). Use `--compat-options playlist-match-filter` to revert this

For ease of use, a few more compat options are available:

* `--compat-options all`: Use all compat options (Do NOT use)
* `--compat-options youtube-dl`: Same as `--compat-options all,-multistreams,-playlist-match-filter`
* `--compat-options youtube-dlc`: Same as `--compat-options all,-no-live-chat,-no-youtube-channel-redirect,-playlist-match-filter`
* `--compat-options 2021`: Same as `--compat-options 2022,no-certifi,filename-sanitization,no-youtube-prefer-utc-upload-date`
* `--compat-options 2022`: Same as `--compat-options playlist-match-filter,no-external-downloader-progress`. Use this to enable all future compat options
# INSTALLATION

[![All versions](https://img.shields.io/badge/-All_Versions-lightgrey.svg?style=for-the-badge)](https://github.com/yt-dlp/yt-dlp/releases)
<!-- MANPAGE: END EXCLUDED SECTION -->

You can install yt-dlp using [the binaries](#release-files), [pip](https://pypi.org/project/yt-dlp) or a third-party package manager. See [the wiki](https://github.com/yt-dlp/yt-dlp/wiki/Installation) for detailed instructions
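For example, a minimal sketch of installing or upgrading with pip (see the wiki for platform-specific instructions):

```
python3 -m pip install -U yt-dlp
```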
## UPDATE
You can use `yt-dlp -U` to update if you are using the [release binaries](#release-files)

If you [installed with pip](https://github.com/yt-dlp/yt-dlp/wiki/Installation#with-pip), simply re-run the same command that was used to install the program

For other third-party package managers, see [the wiki](https://github.com/yt-dlp/yt-dlp/wiki/Installation#third-party-package-managers) or refer to their documentation

<a id="update-channels"/>

There are currently two release channels for binaries, `stable` and `nightly`.
`stable` is the default channel, and many of its changes have been tested by users of the nightly channel.
The `nightly` channel has releases built after each push to the master branch, and will have the most recent fixes and additions, but also more risk of regressions. They are available in [their own repo](https://github.com/yt-dlp/yt-dlp-nightly-builds/releases).

When using `--update`/`-U`, a release binary will only update to its current channel.
`--update-to CHANNEL` can be used to switch to a different channel when a newer version is available. `--update-to [CHANNEL@]TAG` can also be used to upgrade or downgrade to specific tags from a channel.

You may also use `--update-to <repository>` (`<owner>/<repository>`) to update to a channel on a completely different repository. Be careful with what repository you are updating to, though; there is no verification done for binaries from different repositories.

Example usage:
* `yt-dlp --update-to nightly` switches to the `nightly` channel and updates to its latest release
* `yt-dlp --update-to stable@2023.02.17` upgrades/downgrades to the `stable` channel release tagged `2023.02.17`
* `yt-dlp --update-to 2023.01.06` upgrades/downgrades to tag `2023.01.06` if it exists on the current channel
* `yt-dlp --update-to example/yt-dlp@2023.03.01` upgrades/downgrades to the release from the `example/yt-dlp` repository, tag `2023.03.01`
<!-- MANPAGE: BEGIN EXCLUDED SECTION -->
## RELEASE FILES

#### Misc
File|Description
:---|:---
[yt-dlp.tar.gz](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp.tar.gz)|Source tarball
[SHA2-512SUMS](https://github.com/yt-dlp/yt-dlp/releases/latest/download/SHA2-512SUMS)|GNU-style SHA512 sums
[SHA2-512SUMS.sig](https://github.com/yt-dlp/yt-dlp/releases/latest/download/SHA2-512SUMS.sig)|GPG signature file for SHA512 sums
[SHA2-256SUMS](https://github.com/yt-dlp/yt-dlp/releases/latest/download/SHA2-256SUMS)|GNU-style SHA256 sums
[SHA2-256SUMS.sig](https://github.com/yt-dlp/yt-dlp/releases/latest/download/SHA2-256SUMS.sig)|GPG signature file for SHA256 sums

The public key that can be used to verify the GPG signatures is [available here](https://github.com/yt-dlp/yt-dlp/blob/master/public.key)
Example usage:
```
curl -L https://github.com/yt-dlp/yt-dlp/raw/master/public.key | gpg --import
gpg --verify SHA2-256SUMS.sig SHA2-256SUMS
gpg --verify SHA2-512SUMS.sig SHA2-512SUMS
```
<!-- MANPAGE: END EXCLUDED SECTION -->

**Note**: The manpages, shell completion files etc. are available in the [source tarball](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp.tar.gz)
## DEPENDENCIES
Python versions 3.7+ (CPython and PyPy) are supported. Other versions and implementations may or may not work correctly.
### Related scripts

* **`devscripts/update-version.py`** - Update the version number based on current date.
* **`devscripts/set-variant.py`** - Set the build variant of the executable.
* **`devscripts/make_changelog.py`** - Create a markdown changelog using short commit messages and update `CONTRIBUTORS` file.
* **`devscripts/make_lazy_extractors.py`** - Create lazy extractors. Running this before building the binaries (any variant) will improve their startup performance. Set the environment variable `YTDLP_NO_LAZY_EXTRACTORS=1` if you wish to forcefully disable lazy extractor loading.

Note: See their `--help` for more info.

### Forking the project
If you fork the project on GitHub, you can run your fork's [build workflow](.github/workflows/build.yml) to automatically build the selected version(s) as artifacts. Alternatively, you can run the [release workflow](.github/workflows/release.yml) or enable the [nightly workflow](.github/workflows/release-nightly.yml) to create full (pre-)releases.
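As a rough illustration of the scripts above, preparing a source checkout before a local build might look like this (a sketch run from the repository root; exact flags may vary, see each script's `--help`):

```
python devscripts/update-version.py
python devscripts/make_lazy_extractors.py
```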
# USAGE AND OPTIONS

## General Options:
    --version                       Print program version and exit
    -U, --update                    Update this program to the latest version
    --no-update                     Do not check for updates (default)
    --update-to [CHANNEL]@[TAG]     Upgrade/downgrade to a specific version. CHANNEL can be a repository as well. CHANNEL and TAG default to "stable" and "latest" respectively if omitted; See "UPDATE" for details. Supported channels: stable, nightly
    -i, --ignore-errors             Ignore download and postprocessing errors. The download will be considered successful even if the postprocessing fails
                                    configuration files
    --flat-playlist                 Do not extract the videos of a playlist, only list them
    --no-flat-playlist              Fully extract the videos of a playlist (default)
    --live-from-start               Download livestreams from the start. Currently only supported for YouTube (Experimental)
    --no-wait-for-video             Do not wait for scheduled streams (default)
    --mark-watched                  Mark videos watched (even with --simulate)
    --no-mark-watched               Do not mark videos watched (default)
    --color [STREAM:]POLICY         Whether to emit color codes in output, optionally prefixed by the STREAM (stdout or stderr) to apply the setting to. Can be one of "always", "auto" (default), "never", or "no_color" (use non-color terminal sequences). Can be used multiple times
    --compat-options OPTS           Options that can help keep compatibility with youtube-dl or youtube-dlc configurations by reverting some of the
## Geo-restriction:
                                    specified by --proxy (or none, if the option is not present) is used for the actual downloading
    --xff VALUE                     How to fake X-Forwarded-For HTTP header to try bypassing geographic restriction. One of "default" (only when known to be useful), "never", an IP block in CIDR notation, or a two-letter ISO 3166-2 country code

## Video Selection:
    -I, --playlist-items ITEM_SPEC  Comma separated playlist_index of the items
    --date DATE                     Download only videos uploaded on this date. The date can be "YYYYMMDD" or in the format [now|today|yesterday][-N[day|week|month|year]]. E.g. "--date today-2weeks" downloads only videos uploaded on the same day two weeks ago
    --datebefore DATE               Download only videos uploaded on or before this date. The date formats accepted are the same as --date
                                    dogs" (caseless). Use "--match-filter -" to interactively ask whether to download each video
    --no-match-filters              Do not use any --match-filter (default)
    --break-match-filters FILTER    Same as "--match-filters" but stops the download process when a video is rejected
    --no-break-match-filters        Do not use any --break-match-filters (default)
    --no-playlist                   Download only the video, if the URL refers to a video and a playlist
    --yes-playlist                  Download the playlist, if the URL refers to
    --max-downloads NUMBER          Abort after downloading NUMBER files
    --break-on-existing             Stop the download process when encountering a file that is in the archive
    --break-per-input               Alters --max-downloads, --break-on-existing, --break-match-filter, and autonumber to reset per input URL
    --no-break-per-input            --break-on-existing and similar options terminate the entire download queue
    --skip-playlist-after-errors N  Number of allowed failures until the rest of
## Download Options:
    --no-hls-use-mpegts             Do not use the mpegts container for HLS videos. This is default when not downloading live streams
    --download-sections REGEX       Download only chapters that match the regular expression. A "*" prefix denotes time-range instead of chapter. Negative timestamps are calculated from the end. "*from-url" can be used to download between the "start_time" and "end_time" extracted from the URL. Needs ffmpeg. This option can be used multiple times to download multiple sections, e.g. --download-sections "*10:15-inf" --download-sections "intro"
    --downloader [PROTO:]NAME       Name or path of the external downloader to
## Filesystem Options:
                                    --write-description etc. (default)
    --no-write-playlist-metafiles   Do not write playlist metadata when using --write-info-json, --write-description etc.
    --clean-info-json               Remove some internal metadata such as filenames from the infojson (default)
    --no-clean-info-json            Write all fields to the infojson
    --write-comments                Retrieve video comments to be placed in the infojson. The comments are fetched even
                                    By default, all containers of the most recently accessed profile are used. Currently supported keyrings are: basictext, gnomekeyring, kwallet, kwallet5, kwallet6
    --no-cookies-from-browser       Do not load cookies from browser (default)
    --cache-dir DIR                 Location in the filesystem where yt-dlp can store some downloaded information (such as
## Verbosity and Simulation Options:
    -q, --quiet                     Activate quiet mode. If used with --verbose, print the log to stderr
    --no-quiet                      Deactivate quiet mode. (Default)
    --no-warnings                   Ignore warnings
    -s, --simulate                  Do not download the video and do not write anything to disk

## Workarounds:
    --prefer-insecure               Use an unencrypted connection to retrieve information about the video (Currently supported only for YouTube)
    --add-headers FIELD:VALUE       Specify a custom HTTP header and its value, separated by a colon ":". You can use this option multiple times
    --bidi-workaround               Work around terminals that lack
## Authentication Options:
    --netrc-location PATH           Location of .netrc authentication data; either the path or its containing directory. Defaults to ~/.netrc
    --netrc-cmd NETRC_CMD           Command to execute to get the credentials for an extractor.
    --video-password PASSWORD       Video password (vimeo, youku)
    --ap-mso MSO                    Adobe Pass multiple-system operator (TV provider) identifier, use --ap-list-mso for
## Post-Processing Options:
                                    that of --use-postprocessor (default: after_move). Same syntax as the output template can be used to pass any field as arguments to the command. If no fields are passed, %(filepath,_filename|)q is appended to the end of the command. This option can be used multiple times
    --no-exec                       Remove any previously defined --exec
    --convert-subs FORMAT           Convert the subtitles to another format (currently supported: ass, lrc, srt, vtt)
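To illustrate the `--exec` template behavior described above, a minimal sketch (the video URL is only a placeholder):

```
yt-dlp --exec "echo %(filepath,_filename|)q" "https://www.youtube.com/watch?v=BaW_jenozKc"
```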
### Configuration file encoding

If you want your file to be decoded differently, add `# coding: ENCODING` to the beginning of the file (e.g. `# coding: shift-jis`). There must be no characters before that, even spaces or BOM.
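For example, a config file saved in Shift JIS might begin like this (a minimal sketch; the option line is just an illustration):

```
# coding: shift-jis
-o "%(title)s.%(ext)s"
```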
### Authentication with netrc
You may also want to configure automatic credentials storage for extractors that support authentication (by providing login and password with `--username` and `--password`) in order not to pass credentials as command line arguments on every yt-dlp execution and prevent tracking plain text passwords in the shell command history. You can achieve this using a [`.netrc` file](https://stackoverflow.com/tags/.netrc/info) on a per-extractor basis. For that you will need to create a `.netrc` file in `--netrc-location` and restrict permissions to read/write by only you:
```
touch ${HOME}/.netrc
chmod a-rwx,u+rw ${HOME}/.netrc
```

The default location of the .netrc file is `~` (see below).

As an alternative to using the `.netrc` file, which has the disadvantage of keeping your passwords in a plain text file, you can configure a custom shell command to provide the credentials for an extractor. This is done by providing the `--netrc-cmd` parameter; it must output the credentials in the netrc format and return `0` on success; other values will be treated as an error. `{}` in the command will be replaced by the name of the extractor to make it possible to select the credentials for the right extractor.

E.g. to use an encrypted `.netrc` file stored as `.authinfo.gpg`
```
yt-dlp --netrc-cmd 'gpg --decrypt ~/.authinfo.gpg' https://www.youtube.com/watch?v=BaW_jenozKc
```
### Notes about environment variables
* Environment variables are normally specified as `${VARIABLE}`/`$VARIABLE` on UNIX and `%VARIABLE%` on Windows; but are always shown as `${VARIABLE}` in this documentation
* yt-dlp also allows using UNIX-style variables on Windows for path-like options; e.g. `--output`, `--config-location`
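For instance, a hypothetical Windows invocation using a UNIX-style variable in a path-like option:

```
yt-dlp --output "${USERPROFILE}/Videos/%(title)s.%(ext)s" "https://www.youtube.com/watch?v=BaW_jenozKc"
```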
# OUTPUT TEMPLATE

1. **Alternatives**: Alternate fields can be specified separated with a `,`. E.g. `%(release_date>%Y,upload_date>%Y|Unknown)s`

1. **Replacement**: A replacement value can be specified using a `&` separator according to the [`str.format` mini-language](https://docs.python.org/3/library/string.html#format-specification-mini-language). If the field is *not* empty, this replacement value will be used instead of the actual field content. This is done after alternate fields are considered; thus the replacement is used if *any* of the alternative fields is *not* empty. E.g. `%(chapters&has chapters|no chapters)s`, `%(title&TITLE={:>20}|NO TITLE)s`

1. **Default**: A literal default value can be specified for when the field is empty using a `|` separator. This overrides `--output-na-placeholder`. E.g. `%(uploader|Unknown)s`

Additionally, you can set different output templates for the various metadata files separately from the general output template by specifying the type of file followed by the template separated by a colon `:`. The different file types supported are `subtitle`, `thumbnail`, `description`, `annotation` (deprecated), `infojson`, `link`, `pl_thumbnail`, `pl_description`, `pl_infojson`, `chapter`, `pl_video`. E.g. `-o "%(title)s.%(ext)s" -o "thumbnail:%(title)s\%(title)s.%(ext)s"` will put the thumbnails in a folder with the same name as the video. If any of the templates is empty, that type of file will not be written. E.g. `--write-thumbnail -o "thumbnail:"` will write thumbnails only for playlists and not for video.

<a id="outtmpl-postprocess-note"/>

**Note**: Due to post-processing (i.e. merging etc.), the actual output filename might differ. Use `--print after_move:filepath` to get the name after all post-processing is complete.
- `channel` (string): Full name of the channel the video is uploaded on
- `channel_id` (string): Id of the channel
- `channel_follower_count` (numeric): Number of followers of the channel
- `channel_is_verified` (boolean): Whether the channel is verified on the platform
- `location` (string): Physical location where the video was filmed
- `duration` (numeric): Length of the video in seconds
- `duration_string` (string): Length of the video (HH:mm:ss)

- `subtitles_table` (table): The subtitle format table as printed by `--list-subs`
- `automatic_captions_table` (table): The automatic subtitle format table as printed by `--list-subs`

Available only after the video is downloaded (`post_process`/`after_move`):

- `filepath`: Actual path of downloaded video file

Available only in `--sponsorblock-chapter-title`:

- `start_time` (numeric): Start time of the chapter in seconds
# Download YouTube playlist videos in separate directories according to their upload year
$ yt-dlp -o "%(upload_date>%Y)s/%(title)s.%(ext)s" "https://www.youtube.com/playlist?list=PLwiyx1dc3P2JR9N8gQaQN_BCvlSlap7re"

# Prefix playlist index with " - " separator, but only if it is available
$ yt-dlp -o "%(playlist_index&{} - |)s%(title)s.%(ext)s" BaW_jenozKc "https://www.youtube.com/user/TheLinuxFoundation/playlists"

# Download all playlists of YouTube channel/user keeping each playlist in separate directory:
$ yt-dlp -o "%(uploader)s/%(playlist)s/%(playlist_index)s - %(title)s.%(ext)s" "https://www.youtube.com/user/TheLinuxFoundation/playlists"

## Sorting Formats
- `source`: The preference of the source
- `proto`: Protocol used for download (`https`/`ftps` > `http`/`ftp` > `m3u8_native`/`m3u8` > `http_dash_segments` > `websocket_frag` > `mms`/`rtsp` > `f4f`/`f4m`)
- `vcodec`: Video Codec (`av01` > `vp9.2` > `vp9` > `h265` > `h264` > `vp8` > `h263` > `theora` > other)
- `acodec`: Audio Codec (`flac`/`alac` > `wav`/`aiff` > `opus` > `vorbis` > `aac` > `mp4a` > `mp3` > `ac4` > `eac3` > `ac3` > `dts` > other)
- `codec`: Equivalent to `vcodec,acodec`
- `vext`: Video Extension (`mp4` > `mov` > `webm` > `flv` > other). If `--prefer-free-formats` is used, `webm` is preferred.
- `aext`: Audio Extension (`m4a` > `aac` > `mp3` > `ogg` > `opus` > `webm` > other). If `--prefer-free-formats` is used, the order changes to `ogg` > `opus` > `webm` > `mp3` > `m4a` > `aac`
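As a sketch of how these fields combine in `-S` (the particular fields chosen here are arbitrary):

```
$ yt-dlp -S "res:1080,proto,codec" "https://www.youtube.com/watch?v=BaW_jenozKc"
```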
@@ -1678,7 +1730,7 @@ # MODIFYING METADATA

 This option also has a few special uses:

-* You can download an additional URL based on the metadata of the currently downloaded video. To do this, set the field `additional_urls` to the URL that you want to download. E.g. `--parse-metadata "description:(?P<additional_urls>https?://www\.vimeo\.com/\d+)` will download the first vimeo video found in the description
+* You can download an additional URL based on the metadata of the currently downloaded video. To do this, set the field `additional_urls` to the URL that you want to download. E.g. `--parse-metadata "description:(?P<additional_urls>https?://www\.vimeo\.com/\d+)"` will download the first vimeo video found in the description

 * You can use this to change the metadata that is embedded in the media file. To do this, set the value of the corresponding field with a `meta_` prefix. For example, any value you set to `meta_description` field will be added to the `description` field in the file - you can use this to set a different "description" and "synopsis". To modify the metadata of individual streams, use the `meta<n>_` prefix (e.g. `meta1_language`). Any value set to the `meta_` field will overwrite all default values.
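The `FIELD:REGEX` syntax above relies on Python named groups: whatever `(?P<name>...)` captures becomes the value of the metadata field `name`. A standalone sketch of that mechanism (the sample description is made up):

```python
import re

# The named group's name doubles as the destination metadata field
description = 'Watch the extras at https://www.vimeo.com/123456 after the show'
match = re.search(r'(?P<additional_urls>https?://www\.vimeo\.com/\d+)', description)
if match:
    print(match.group('additional_urls'))  # -> https://www.vimeo.com/123456
```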
@@ -1730,7 +1782,7 @@ # Do not set any "synopsis" in the video metadata
 $ yt-dlp --parse-metadata ":(?P<meta_synopsis>)"

 # Remove "formats" field from the infojson by setting it to an empty string
-$ yt-dlp --parse-metadata ":(?P<formats>)" -j
+$ yt-dlp --parse-metadata "video::(?P<formats>)" --write-info-json

 # Replace all spaces and "_" in title and uploader with a `-`
 $ yt-dlp --replace-in-metadata "title,uploader" "[ _]" "-"
@@ -1741,16 +1793,19 @@ # EXTRACTOR ARGUMENTS

 Some extractors accept additional arguments which can be passed using `--extractor-args KEY:ARGS`. `ARGS` is a `;` (semicolon) separated string of `ARG=VAL1,VAL2`. E.g. `--extractor-args "youtube:player-client=android_embedded,web;include_live_dash" --extractor-args "funimation:version=uncut"`

+Note: In CLI, `ARG` can use `-` instead of `_`; e.g. `youtube:player-client` becomes `youtube:player_client`
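When embedding yt-dlp, the same arguments are passed via the `extractor_args` option as a dict of per-extractor dicts of value lists; a sketch mirroring the CLI example above (treat the exact shape as illustrative):

```python
import yt_dlp

# API form of `--extractor-args "youtube:player-client=android_embedded,web"`
# combined with `--extractor-args "funimation:version=uncut"`
ydl_opts = {
    'extractor_args': {
        'youtube': {'player_client': ['android_embedded', 'web']},
        'funimation': {'version': ['uncut']},
    },
}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
```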
 The following extractors use this feature:

 #### youtube
 * `lang`: Prefer translated metadata (`title`, `description` etc) of this language code (case-sensitive). By default, the video primary language metadata is preferred, with a fallback to `en` translated. See [youtube.py](https://github.com/yt-dlp/yt-dlp/blob/c26f9b991a0681fd3ea548d535919cec1fbbd430/yt_dlp/extractor/youtube.py#L381-L390) for list of supported content language codes
 * `skip`: One or more of `hls`, `dash` or `translated_subs` to skip extraction of the m3u8 manifests, dash manifests and [auto-translated subtitles](https://github.com/yt-dlp/yt-dlp/issues/4090#issuecomment-1158102032) respectively
-* `player_client`: Clients to extract video data from. The main clients are `web`, `android` and `ios` with variants `_music`, `_embedded`, `_embedscreen`, `_creator` (e.g. `web_embedded`); and `mweb` and `tv_embedded` (agegate bypass) with no variants. By default, `android,web` is used, but `tv_embedded` and `creator` variants are added as required for age-gated videos. Similarly, the music variants are added for `music.youtube.com` urls. You can use `all` to use all the clients, and `default` for the default clients.
+* `player_client`: Clients to extract video data from. The main clients are `web`, `android` and `ios` with variants `_music`, `_embedded`, `_embedscreen`, `_creator` (e.g. `web_embedded`); and `mweb` and `tv_embedded` (agegate bypass) with no variants. By default, `ios,android,web` is used, but `tv_embedded` and `creator` variants are added as required for age-gated videos. Similarly, the music variants are added for `music.youtube.com` urls. You can use `all` to use all the clients, and `default` for the default clients.
 * `player_skip`: Skip some network requests that are generally needed for robust extraction. One or more of `configs` (skip client configs), `webpage` (skip initial webpage), `js` (skip js player). While these options can help reduce the number of requests needed or avoid some rate-limiting, they could cause some issues. See [#860](https://github.com/yt-dlp/yt-dlp/pull/860) for more details
 * `comment_sort`: `top` or `new` (default) - choose comment sorting mode (on YouTube's side)
 * `max_comments`: Limit the amount of comments to gather. Comma-separated list of integers representing `max-comments,max-parents,max-replies,max-replies-per-thread`. Default is `all,all,all,all`
     * E.g. `all,all,1000,10` will get a maximum of 1000 replies total, with up to 10 replies per thread. `1000,all,100` will get a maximum of 1000 comments, with a maximum of 100 replies total
+* `include_duplicate_formats`: Extract formats with identical content but different URLs or protocol. This is useful if some of the formats are unavailable or throttled.
 * `include_incomplete_formats`: Extract formats that cannot be downloaded completely (live dash and post-live m3u8)
 * `innertube_host`: Innertube API host to use for all API requests; e.g. `studio.youtube.com`, `youtubei.googleapis.com`. Note that cookies exported from one subdomain will not work on others
 * `innertube_key`: Innertube API key to use for all API requests
@@ -1760,7 +1815,10 @@ #### youtubetab (YouTube playlists, channels, feeds, etc.)
 * `approximate_date`: Extract approximate `upload_date` and `timestamp` in flat-playlist. This may cause date-based filters to be slightly off

 #### generic
-* `fragment_query`: Passthrough any query in mpd/m3u8 manifest URLs to their fragments. Does not apply to ffmpeg
+* `fragment_query`: Passthrough any query in mpd/m3u8 manifest URLs to their fragments if no value is provided, or else apply the query string given as `fragment_query=VALUE`. Does not apply to ffmpeg
+* `variant_query`: Passthrough the master m3u8 URL query to its variant playlist URLs if no value is provided, or else apply the query string given as `variant_query=VALUE`
+* `hls_key`: An HLS AES-128 key URI *or* key (as hex), and optionally the IV (as hex), in the form of `(URI|KEY)[,IV]`; e.g. `generic:hls_key=ABCDEF1234567980,0xFEDCBA0987654321`. Passing any of these values will force usage of the native HLS downloader and override the corresponding values found in the m3u8 playlist
+* `is_live`: Bypass live HLS detection and manually set `live_status` - a value of `false` will set `not_live`, any other value (or no value) will set `is_live`

 #### funimation
 * `language`: Audio languages to extract, e.g. `funimation:language=english,japanese`
@@ -1796,7 +1854,16 @@ #### rokfinchannel
 * `tab`: Which tab to download - one of `new`, `top`, `videos`, `podcasts`, `streams`, `stacks`

 #### twitter
-* `force_graphql`: Force usage of the GraphQL API. By default it will only be used if login cookies are provided
+* `legacy_api`: Force usage of the legacy Twitter API instead of the GraphQL API for tweet extraction. Has no effect if login cookies are passed

+#### wrestleuniverse
+* `device_id`: UUID value assigned by the website and used to enforce device limits for paid livestream content. Can be found in browser local storage

+#### twitch
+* `client_id`: Client ID value to be sent with GraphQL requests, e.g. `twitch:client_id=kimne78kx3ncx6brgo4mv6wki5h1ko`

+#### nhkradirulive (NHK らじる★らじる LIVE)
+* `area`: Which regional variation to extract. Valid areas are: `sapporo`, `sendai`, `tokyo`, `nagoya`, `osaka`, `hiroshima`, `matsuyama`, `fukuoka`. Defaults to `tokyo`

 **Note**: These options may be changed/removed in the future without concern for backward compatibility
@@ -1843,7 +1910,7 @@ ## Installing Plugins
 * **System Plugins**
    * `/etc/yt-dlp/plugins/<package name>/yt_dlp_plugins/`
    * `/etc/yt-dlp-plugins/<package name>/yt_dlp_plugins/`
-2. **Executable location**: Plugin packages can similarly be installed in a `yt-dlp-plugins` directory under the executable location:
+2. **Executable location**: Plugin packages can similarly be installed in a `yt-dlp-plugins` directory under the executable location (recommended for portable installations):
    * Binary: where `<root-dir>/yt-dlp.exe`, `<root-dir>/yt-dlp-plugins/<package name>/yt_dlp_plugins/`
    * Source: where `<root-dir>/yt_dlp/__main__.py`, `<root-dir>/yt-dlp-plugins/<package name>/yt_dlp_plugins/`
@@ -1887,7 +1954,7 @@ # EMBEDDING YT-DLP
     ydl.download(URLS)
 ```

-Most likely, you'll want to use various options. For a list of options available, have a look at [`yt_dlp/YoutubeDL.py`](yt_dlp/YoutubeDL.py#L180).
+Most likely, you'll want to use various options. For a list of options available, have a look at [`yt_dlp/YoutubeDL.py`](yt_dlp/YoutubeDL.py#L184).

 **Tip**: If you are porting your code from youtube-dl to yt-dlp, one important point to look out for is that we do not guarantee the return value of `YoutubeDL.extract_info` to be json serializable, or even be a dictionary. It will be dictionary-like, but if you want to ensure it is a serializable dictionary, pass it through `YoutubeDL.sanitize_info` as shown in the [example below](#extracting-information)
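A minimal sketch of that tip in practice (the URL is the README's usual test video):

```python
import json

import yt_dlp

URL = 'https://www.youtube.com/watch?v=BaW_jenozKc'

with yt_dlp.YoutubeDL() as ydl:
    info = ydl.extract_info(URL, download=False)
    # sanitize_info makes the dictionary-like result safely JSON-serializable
    print(json.dumps(ydl.sanitize_info(info)))
```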
@@ -2031,7 +2098,7 @@ #### Use a custom format selector
 ```python
 import yt_dlp

-URL = ['https://www.youtube.com/watch?v=BaW_jenozKc']
+URLS = ['https://www.youtube.com/watch?v=BaW_jenozKc']

 def format_selector(ctx):
     """ Select the best video and the best audio that won't result in an mkv.
@@ -2097,12 +2164,14 @@ #### Redundant options
 --reject-title REGEX             --match-filter "title !~= (?i)REGEX"
 --min-views COUNT                --match-filter "view_count >=? COUNT"
 --max-views COUNT                --match-filter "view_count <=? COUNT"
+--break-on-reject                Use --break-match-filter
 --user-agent UA                  --add-header "User-Agent:UA"
 --referer URL                    --add-header "Referer:URL"
 --playlist-start NUMBER          -I NUMBER:
 --playlist-end NUMBER            -I :NUMBER
 --playlist-reverse               -I ::-1
 --no-playlist-reverse            Default
+--no-colors                      --color no_color
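For embedders, the `--match-filter` replacements above correspond to the `match_filter` option; a sketch using the `match_filter_func` helper (assuming it stays exported from `yt_dlp.utils`):

```python
import yt_dlp
from yt_dlp.utils import match_filter_func

# API counterpart of `--min-views 1000`, via its --match-filter replacement
ydl_opts = {'match_filter': match_filter_func('view_count >=? 1000')}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
```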

 #### Not recommended
@@ -2126,6 +2195,10 @@ #### Not recommended
 --youtube-skip-hls-manifest      --extractor-args "youtube:skip=hls" (Alias: --no-youtube-include-hls-manifest)
 --youtube-include-dash-manifest  Default (Alias: --no-youtube-skip-dash-manifest)
 --youtube-include-hls-manifest   Default (Alias: --no-youtube-skip-hls-manifest)
+--geo-bypass                     --xff "default"
+--no-geo-bypass                  --xff "never"
+--geo-bypass-country CODE        --xff CODE
+--geo-bypass-ip-block IP_BLOCK   --xff IP_BLOCK

 #### Developer options

devscripts/changelog_override.json (new file, 60 lines)
@@ -0,0 +1,60 @@
[
    {
        "action": "add",
        "when": "776d1c3f0c9b00399896dd2e40e78e9a43218109",
        "short": "[priority] **A new release type has been added!**\n    * [`nightly`](https://github.com/yt-dlp/yt-dlp/releases/tag/nightly) builds will be made after each push, containing the latest fixes (but also possibly bugs).\n    * When using `--update`/`-U`, a release binary will only update to its current channel (either `stable` or `nightly`).\n    * The `--update-to` option has been added allowing the user more control over program upgrades (or downgrades).\n    * `--update-to` can change the release channel (`stable`, `nightly`) and also upgrade or downgrade to specific tags.\n    * **Usage**: `--update-to CHANNEL`, `--update-to TAG`, `--update-to CHANNEL@TAG`"
    },
    {
        "action": "add",
        "when": "776d1c3f0c9b00399896dd2e40e78e9a43218109",
        "short": "[priority] **YouTube throttling fixes!**"
    },
    {
        "action": "remove",
        "when": "2e023649ea4e11151545a34dc1360c114981a236"
    },
    {
        "action": "add",
        "when": "01aba2519a0884ef17d5f85608dbd2a455577147",
        "short": "[priority] YouTube: Improved throttling and signature fixes"
    },
    {
        "action": "change",
        "when": "c86e433c35fe5da6cb29f3539eef97497f84ed38",
        "short": "[extractor/niconico:series] Fix extraction (#6898)",
        "authors": ["sqrtNOT"]
    },
    {
        "action": "change",
        "when": "69a40e4a7f6caa5662527ebd2f3c4e8aa02857a2",
        "short": "[extractor/youtube:music_search_url] Extract title (#7102)",
        "authors": ["kangalio"]
    },
    {
        "action": "change",
        "when": "8417f26b8a819cd7ffcd4e000ca3e45033e670fb",
        "short": "Add option `--color` (#6904)",
        "authors": ["Grub4K"]
    },
    {
        "action": "change",
        "when": "7b37e8b23691613f331bd4ebc9d639dd6f93c972",
        "short": "Improve `--download-sections`\n    - Support negative time-ranges\n    - Add `*from-url` to obey time-ranges in URL"
    },
    {
        "action": "change",
        "when": "1e75d97db21152acc764b30a688e516f04b8a142",
        "short": "[extractor/youtube] Add `ios` to default clients used\n    - IOS is affected neither by 403 nor by nsig so helps mitigate them preemptively\n    - IOS also has higher bit-rate 'premium' formats though they are not labeled as such"
    },
    {
        "action": "change",
        "when": "f2ff0f6f1914b82d4a51681a72cc0828115dcb4a",
        "short": "[extractor/motherless] Add gallery support, fix groups (#7211)",
        "authors": ["rexlambert22", "Ti4eeT4e"]
    },
    {
        "action": "change",
        "when": "a4486bfc1dc7057efca9dd3fe70d7fa25c56f700",
        "short": "[misc] Revert \"Add automatic duplicate issue detection\""
    }
]

devscripts/changelog_override.schema.json (new file, 96 lines)
@@ -0,0 +1,96 @@
{
    "$schema": "http://json-schema.org/draft/2020-12/schema",
    "type": "array",
    "uniqueItems": true,
    "items": {
        "type": "object",
        "oneOf": [
            {
                "type": "object",
                "properties": {
                    "action": {
                        "enum": [
                            "add"
                        ]
                    },
                    "when": {
                        "type": "string",
                        "pattern": "^([0-9a-f]{40}|\\d{4}\\.\\d{2}\\.\\d{2})$"
                    },
                    "hash": {
                        "type": "string",
                        "pattern": "^[0-9a-f]{40}$"
                    },
                    "short": {
                        "type": "string"
                    },
                    "authors": {
                        "type": "array",
                        "items": {
                            "type": "string"
                        }
                    }
                },
                "required": [
                    "action",
                    "short"
                ]
            },
            {
                "type": "object",
                "properties": {
                    "action": {
                        "enum": [
                            "remove"
                        ]
                    },
                    "when": {
                        "type": "string",
                        "pattern": "^([0-9a-f]{40}|\\d{4}\\.\\d{2}\\.\\d{2})$"
                    },
                    "hash": {
                        "type": "string",
                        "pattern": "^[0-9a-f]{40}$"
                    }
                },
                "required": [
                    "action",
                    "hash"
                ]
            },
            {
                "type": "object",
                "properties": {
                    "action": {
                        "enum": [
                            "change"
                        ]
                    },
                    "when": {
                        "type": "string",
                        "pattern": "^([0-9a-f]{40}|\\d{4}\\.\\d{2}\\.\\d{2})$"
                    },
                    "hash": {
                        "type": "string",
                        "pattern": "^[0-9a-f]{40}$"
                    },
                    "short": {
                        "type": "string"
                    },
                    "authors": {
                        "type": "array",
                        "items": {
                            "type": "string"
                        }
                    }
                },
                "required": [
                    "action",
                    "hash",
                    "short",
                    "authors"
                ]
            }
        ]
    }
}
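A sketch of checking the override file against this schema using the third-party `jsonschema` package (not a dependency of the repo; shown only for illustration):

```python
import json

from jsonschema import validate  # pip install jsonschema

with open('devscripts/changelog_override.json') as f:
    overrides = json.load(f)
with open('devscripts/changelog_override.schema.json') as f:
    schema = json.load(f)

# Raises jsonschema.ValidationError if any override entry is malformed
validate(instance=overrides, schema=schema)
```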

devscripts/cli_to_api.py (new file, 48 lines)
@@ -0,0 +1,48 @@
# Allow direct execution
import os
import sys

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

import yt_dlp
import yt_dlp.options

create_parser = yt_dlp.options.create_parser


def parse_patched_options(opts):
    patched_parser = create_parser()
    patched_parser.defaults.update({
        'ignoreerrors': False,
        'retries': 0,
        'fragment_retries': 0,
        'extract_flat': False,
        'concat_playlist': 'never',
    })
    yt_dlp.options.create_parser = lambda: patched_parser
    try:
        return yt_dlp.parse_options(opts)
    finally:
        yt_dlp.options.create_parser = create_parser


default_opts = parse_patched_options([]).ydl_opts


def cli_to_api(opts, cli_defaults=False):
    opts = (yt_dlp.parse_options if cli_defaults else parse_patched_options)(opts).ydl_opts

    diff = {k: v for k, v in opts.items() if default_opts[k] != v}
    if 'postprocessors' in diff:
        diff['postprocessors'] = [pp for pp in diff['postprocessors']
                                  if pp not in default_opts['postprocessors']]
    return diff


if __name__ == '__main__':
    from pprint import pprint

    print('\nThe arguments passed translate to:\n')
    pprint(cli_to_api(sys.argv[1:]))
    print('\nCombining these with the CLI defaults gives:\n')
    pprint(cli_to_api(sys.argv[1:], True))
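For example, running the script from the repo root translates a command line into its API options; the flags below are arbitrary, and the importable form assumes the repo root is on `sys.path`:

```python
# CLI form: python devscripts/cli_to_api.py -f bestvideo+bestaudio --no-mtime
# Programmatic form (from the repo root):
from devscripts.cli_to_api import cli_to_api

print(cli_to_api(['-f', 'bestvideo+bestaudio', '--no-mtime']))
# roughly -> {'format': 'bestvideo+bestaudio', 'updatetime': False}
```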

devscripts/make_changelog.py (new file, 495 lines)
@@ -0,0 +1,495 @@
from __future__ import annotations

# Allow direct execution
import os
import sys

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

import enum
import itertools
import json
import logging
import re
from collections import defaultdict
from dataclasses import dataclass
from functools import lru_cache
from pathlib import Path

from devscripts.utils import read_file, run_process, write_file

BASE_URL = 'https://github.com'
LOCATION_PATH = Path(__file__).parent
HASH_LENGTH = 7

logger = logging.getLogger(__name__)


class CommitGroup(enum.Enum):
    PRIORITY = 'Important'
    CORE = 'Core'
    EXTRACTOR = 'Extractor'
    DOWNLOADER = 'Downloader'
    POSTPROCESSOR = 'Postprocessor'
    MISC = 'Misc.'

    @classmethod
    @property
    def ignorable_prefixes(cls):
        return ('core', 'downloader', 'extractor', 'misc', 'postprocessor', 'upstream')

    @classmethod
    @lru_cache
    def commit_lookup(cls):
        return {
            name: group
            for group, names in {
                cls.PRIORITY: {'priority'},
                cls.CORE: {
                    'aes',
                    'cache',
                    'compat_utils',
                    'compat',
                    'cookies',
                    'core',
                    'dependencies',
                    'jsinterp',
                    'outtmpl',
                    'plugins',
                    'update',
                    'upstream',
                    'utils',
                },
                cls.MISC: {
                    'build',
                    'cleanup',
                    'devscripts',
                    'docs',
                    'misc',
                    'test',
                },
                cls.EXTRACTOR: {'extractor'},
                cls.DOWNLOADER: {'downloader'},
                cls.POSTPROCESSOR: {'postprocessor'},
            }.items()
            for name in names
        }

    @classmethod
    def get(cls, value):
        result = cls.commit_lookup().get(value)
        if result:
            logger.debug(f'Mapped {value!r} => {result.name}')
        return result


@dataclass
class Commit:
    hash: str | None
    short: str
    authors: list[str]

    def __str__(self):
        result = f'{self.short!r}'

        if self.hash:
            result += f' ({self.hash[:HASH_LENGTH]})'

        if self.authors:
            authors = ', '.join(self.authors)
            result += f' by {authors}'

        return result


@dataclass
class CommitInfo:
    details: str | None
    sub_details: tuple[str, ...]
    message: str
    issues: list[str]
    commit: Commit
    fixes: list[Commit]

    def key(self):
        return ((self.details or '').lower(), self.sub_details, self.message)


def unique(items):
    return sorted({item.strip().lower(): item for item in items if item}.values())


class Changelog:
    MISC_RE = re.compile(r'(?:^|\b)(?:lint(?:ing)?|misc|format(?:ting)?|fixes)(?:\b|$)', re.IGNORECASE)
    ALWAYS_SHOWN = (CommitGroup.PRIORITY,)

    def __init__(self, groups, repo, collapsible=False):
        self._groups = groups
        self._repo = repo
        self._collapsible = collapsible

    def __str__(self):
        return '\n'.join(self._format_groups(self._groups)).replace('\t', '    ')

    def _format_groups(self, groups):
        first = True
        for item in CommitGroup:
            if self._collapsible and item not in self.ALWAYS_SHOWN and first:
                first = False
                yield '\n<details><summary><h3>Changelog</h3></summary>\n'

            group = groups[item]
            if group:
                yield self.format_module(item.value, group)

        if self._collapsible:
            yield '\n</details>'

    def format_module(self, name, group):
        result = f'\n#### {name} changes\n' if name else '\n'
        return result + '\n'.join(self._format_group(group))

    def _format_group(self, group):
        sorted_group = sorted(group, key=CommitInfo.key)
        detail_groups = itertools.groupby(sorted_group, lambda item: (item.details or '').lower())
        for _, items in detail_groups:
            items = list(items)
            details = items[0].details

            if details == 'cleanup':
                items = self._prepare_cleanup_misc_items(items)

            prefix = '-'
            if details:
                if len(items) == 1:
                    prefix = f'- **{details}**:'
                else:
                    yield f'- **{details}**'
                    prefix = '\t-'

            sub_detail_groups = itertools.groupby(items, lambda item: tuple(map(str.lower, item.sub_details)))
            for sub_details, entries in sub_detail_groups:
                if not sub_details:
                    for entry in entries:
                        yield f'{prefix} {self.format_single_change(entry)}'
                    continue

                entries = list(entries)
                sub_prefix = f'{prefix} {", ".join(entries[0].sub_details)}'
                if len(entries) == 1:
                    yield f'{sub_prefix}: {self.format_single_change(entries[0])}'
                    continue

                yield sub_prefix
                for entry in entries:
                    yield f'\t{prefix} {self.format_single_change(entry)}'

    def _prepare_cleanup_misc_items(self, items):
        cleanup_misc_items = defaultdict(list)
        sorted_items = []
        for item in items:
            if self.MISC_RE.search(item.message):
                cleanup_misc_items[tuple(item.commit.authors)].append(item)
            else:
                sorted_items.append(item)

        for commit_infos in cleanup_misc_items.values():
            sorted_items.append(CommitInfo(
                'cleanup', ('Miscellaneous',), ', '.join(
                    self._format_message_link(None, info.commit.hash).strip()
                    for info in sorted(commit_infos, key=lambda item: item.commit.hash or '')),
                [], Commit(None, '', commit_infos[0].commit.authors), []))

        return sorted_items

    def format_single_change(self, info):
        message = self._format_message_link(info.message, info.commit.hash)
        if info.issues:
            message = message.replace('\n', f' ({self._format_issues(info.issues)})\n', 1)

        if info.commit.authors:
            message = message.replace('\n', f' by {self._format_authors(info.commit.authors)}\n', 1)

        if info.fixes:
            fix_message = ', '.join(f'{self._format_message_link(None, fix.hash)}' for fix in info.fixes)

            authors = sorted({author for fix in info.fixes for author in fix.authors}, key=str.casefold)
            if authors != info.commit.authors:
                fix_message = f'{fix_message} by {self._format_authors(authors)}'

            message = message.replace('\n', f' (With fixes in {fix_message})\n', 1)

        return message[:-1]

    def _format_message_link(self, message, hash):
        assert message or hash, 'Improperly defined commit message or override'
        message = message if message else hash[:HASH_LENGTH]
        if not hash:
            return f'{message}\n'
        return f'[{message}\n'.replace('\n', f']({self.repo_url}/commit/{hash})\n', 1)

    def _format_issues(self, issues):
        return ', '.join(f'[#{issue}]({self.repo_url}/issues/{issue})' for issue in issues)

    @staticmethod
    def _format_authors(authors):
        return ', '.join(f'[{author}]({BASE_URL}/{author})' for author in authors)

    @property
    def repo_url(self):
        return f'{BASE_URL}/{self._repo}'


class CommitRange:
    COMMAND = 'git'
    COMMIT_SEPARATOR = '-----'

    AUTHOR_INDICATOR_RE = re.compile(r'Authored by:? ', re.IGNORECASE)
    MESSAGE_RE = re.compile(r'''
        (?:\[(?P<prefix>[^\]]+)\]\ )?
        (?:(?P<sub_details>`?[^:`]+`?): )?
        (?P<message>.+?)
        (?:\ \((?P<issues>\#\d+(?:,\ \#\d+)*)\))?
        ''', re.VERBOSE | re.DOTALL)
    EXTRACTOR_INDICATOR_RE = re.compile(r'(?:Fix|Add)\s+Extractors?', re.IGNORECASE)
    FIXES_RE = re.compile(r'(?i:Fix(?:es)?(?:\s+bugs?)?(?:\s+in|\s+for)?|Revert)\s+([\da-f]{40})')
    UPSTREAM_MERGE_RE = re.compile(r'Update to ytdl-commit-([\da-f]+)')

    def __init__(self, start, end, default_author=None):
        self._start, self._end = start, end
        self._commits, self._fixes = self._get_commits_and_fixes(default_author)
        self._commits_added = []

    def __iter__(self):
        return iter(itertools.chain(self._commits.values(), self._commits_added))

    def __len__(self):
        return len(self._commits) + len(self._commits_added)

    def __contains__(self, commit):
        if isinstance(commit, Commit):
            if not commit.hash:
                return False
            commit = commit.hash

        return commit in self._commits

    def _get_commits_and_fixes(self, default_author):
        result = run_process(
            self.COMMAND, 'log', f'--format=%H%n%s%n%b%n{self.COMMIT_SEPARATOR}',
            f'{self._start}..{self._end}' if self._start else self._end).stdout

        commits = {}
        fixes = defaultdict(list)
        lines = iter(result.splitlines(False))
        for i, commit_hash in enumerate(lines):
            short = next(lines)
            skip = short.startswith('Release ') or short == '[version] update'

            authors = [default_author] if default_author else []
            for line in iter(lambda: next(lines), self.COMMIT_SEPARATOR):
                match = self.AUTHOR_INDICATOR_RE.match(line)
                if match:
                    authors = sorted(map(str.strip, line[match.end():].split(',')), key=str.casefold)

            commit = Commit(commit_hash, short, authors)
            if skip and (self._start or not i):
                logger.debug(f'Skipped commit: {commit}')
                continue
            elif skip:
                logger.debug(f'Reached Release commit, breaking: {commit}')
                break

            fix_match = self.FIXES_RE.search(commit.short)
            if fix_match:
                commitish = fix_match.group(1)
                fixes[commitish].append(commit)

            commits[commit.hash] = commit

        for commitish, fix_commits in fixes.items():
            if commitish in commits:
                hashes = ', '.join(commit.hash[:HASH_LENGTH] for commit in fix_commits)
                logger.info(f'Found fix(es) for {commitish[:HASH_LENGTH]}: {hashes}')
                for fix_commit in fix_commits:
                    del commits[fix_commit.hash]
            else:
                logger.debug(f'Commit with fixes not in changes: {commitish[:HASH_LENGTH]}')

        return commits, fixes

    def apply_overrides(self, overrides):
        for override in overrides:
            when = override.get('when')
            if when and when not in self and when != self._start:
                logger.debug(f'Ignored {when!r}, not in commits {self._start!r}')
                continue

            override_hash = override.get('hash') or when
            if override['action'] == 'add':
                commit = Commit(override.get('hash'), override['short'], override.get('authors') or [])
                logger.info(f'ADD    {commit}')
                self._commits_added.append(commit)

            elif override['action'] == 'remove':
                if override_hash in self._commits:
                    logger.info(f'REMOVE {self._commits[override_hash]}')
                    del self._commits[override_hash]

            elif override['action'] == 'change':
                if override_hash not in self._commits:
                    continue
                commit = Commit(override_hash, override['short'], override.get('authors') or [])
                logger.info(f'CHANGE {self._commits[commit.hash]} -> {commit}')
                self._commits[commit.hash] = commit

        self._commits = {key: value for key, value in reversed(self._commits.items())}

    def groups(self):
        group_dict = defaultdict(list)
        for commit in self:
            upstream_re = self.UPSTREAM_MERGE_RE.search(commit.short)
            if upstream_re:
                commit.short = f'[core/upstream] Merged with youtube-dl {upstream_re.group(1)}'

            match = self.MESSAGE_RE.fullmatch(commit.short)
            if not match:
                logger.error(f'Error parsing short commit message: {commit.short!r}')
                continue

            prefix, sub_details_alt, message, issues = match.groups()
            issues = [issue.strip()[1:] for issue in issues.split(',')] if issues else []

            if prefix:
                groups, details, sub_details = zip(*map(self.details_from_prefix, prefix.split(',')))
                group = next(iter(filter(None, groups)), None)
                details = ', '.join(unique(details))
                sub_details = list(itertools.chain.from_iterable(sub_details))
            else:
                group = CommitGroup.CORE
                details = None
                sub_details = []

            if sub_details_alt:
                sub_details.append(sub_details_alt)
            sub_details = tuple(unique(sub_details))

            if not group:
                if self.EXTRACTOR_INDICATOR_RE.search(commit.short):
                    group = CommitGroup.EXTRACTOR
                else:
                    group = CommitGroup.POSTPROCESSOR
                logger.warning(f'Failed to map {commit.short!r}, selected {group.name.lower()}')

            commit_info = CommitInfo(
                details, sub_details, message.strip(),
                issues, commit, self._fixes[commit.hash])

            logger.debug(f'Resolved {commit.short!r} to {commit_info!r}')
            group_dict[group].append(commit_info)

        return group_dict

    @staticmethod
    def details_from_prefix(prefix):
        if not prefix:
            return CommitGroup.CORE, None, ()

        prefix, _, details = prefix.partition('/')
        prefix = prefix.strip()
        details = details.strip()

        group = CommitGroup.get(prefix.lower())
        if group is CommitGroup.PRIORITY:
            prefix, _, details = details.partition('/')

        if not details and prefix and prefix not in CommitGroup.ignorable_prefixes:
            logger.debug(f'Replaced details with {prefix!r}')
            details = prefix or None

        if details == 'common':
            details = None

        if details:
            details, *sub_details = details.split(':')
        else:
            sub_details = []

        return group, details, sub_details


def get_new_contributors(contributors_path, commits):
    contributors = set()
    if contributors_path.exists():
        for line in read_file(contributors_path).splitlines():
            author, _, _ = line.strip().partition(' (')
            authors = author.split('/')
            contributors.update(map(str.casefold, authors))

    new_contributors = set()
    for commit in commits:
        for author in commit.authors:
            author_folded = author.casefold()
            if author_folded not in contributors:
                contributors.add(author_folded)
                new_contributors.add(author)

    return sorted(new_contributors, key=str.casefold)


if __name__ == '__main__':
    import argparse

    parser = argparse.ArgumentParser(
        description='Create a changelog markdown from a git commit range')
    parser.add_argument(
        'commitish', default='HEAD', nargs='?',
        help='The commitish to create the range from (default: %(default)s)')
    parser.add_argument(
        '-v', '--verbosity', action='count', default=0,
        help='increase verbosity (can be used twice)')
    parser.add_argument(
        '-c', '--contributors', action='store_true',
        help='update CONTRIBUTORS file (default: %(default)s)')
    parser.add_argument(
        '--contributors-path', type=Path, default=LOCATION_PATH.parent / 'CONTRIBUTORS',
        help='path to the CONTRIBUTORS file')
    parser.add_argument(
        '--no-override', action='store_true',
        help='skip override json in commit generation (default: %(default)s)')
    parser.add_argument(
        '--override-path', type=Path, default=LOCATION_PATH / 'changelog_override.json',
        help='path to the changelog_override.json file')
    parser.add_argument(
        '--default-author', default='pukkandan',
        help='the author to use without a author indicator (default: %(default)s)')
    parser.add_argument(
        '--repo', default='yt-dlp/yt-dlp',
        help='the github repository to use for the operations (default: %(default)s)')
    parser.add_argument(
        '--collapsible', action='store_true',
        help='make changelog collapsible (default: %(default)s)')
    args = parser.parse_args()

    logging.basicConfig(
        datefmt='%Y-%m-%d %H-%M-%S', format='{asctime} | {levelname:<8} | {message}',
        level=logging.WARNING - 10 * args.verbosity, style='{', stream=sys.stderr)

    commits = CommitRange(None, args.commitish, args.default_author)

    if not args.no_override:
        if args.override_path.exists():
            overrides = json.loads(read_file(args.override_path))
            commits.apply_overrides(overrides)
        else:
            logger.warning(f'File {args.override_path.as_posix()} does not exist')

    logger.info(f'Loaded {len(commits)} commits')

    new_contributors = get_new_contributors(args.contributors_path, commits)
    if new_contributors:
        if args.contributors:
            write_file(args.contributors_path, '\n'.join(new_contributors) + '\n', mode='a')
        logger.info(f'New contributors: {", ".join(new_contributors)}')

    print(Changelog(commits.groups(), args.repo, args.collapsible))
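A sketch of driving the generator programmatically instead of via its CLI (it mirrors the `__main__` block above and must run inside a checkout of the repo, since it shells out to `git log`):

```python
from devscripts.make_changelog import Changelog, CommitRange

# Collect commits from the beginning of history up to HEAD, then render them
commits = CommitRange(None, 'HEAD', default_author='pukkandan')
print(Changelog(commits.groups(), 'yt-dlp/yt-dlp', collapsible=True))
```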
@@ -24,6 +24,8 @@
   options:
     - label: Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
       required: true
+    - label: "If using API, add `'verbose': True` to `YoutubeDL` params instead"
+      required: false
     - label: Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
       required: true
 - type: textarea
@@ -58,7 +60,7 @@
       label: DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
       description: Fill all fields even if you think it is irrelevant for the issue
       options:
-        - label: I understand that I will be **blocked** if I remove or skip any mandatory\\* field
+        - label: I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field
           required: true
 '''.strip()
@@ -45,33 +45,43 @@ def apply_patch(text, patch):
 delim = f'\n{" " * switch_col_width}'

 PATCHES = (
-    (  # Standardize update message
+    (  # Standardize `--update` message
        r'(?m)^( -U, --update\s+).+(\n \s.+)*$',
        r'\1Update this program to the latest version',
     ),
     (  # Headings
        r'(?m)^ (\w.+\n)( (?=\w))?',
        r'## \1'
     ),
-    (  # Do not split URLs
+    (  # Fixup `--date` formatting
+       rf'(?m)( --date DATE.+({delim}[^\[]+)*)\[.+({delim}.+)*$',
+       (rf'\1[now|today|yesterday][-N[day|week|month|year]].{delim}'
+        f'E.g. "--date today-2weeks" downloads only{delim}'
+        'videos uploaded on the same day two weeks ago'),
+    ),
+    (  # Do not split URLs
        rf'({delim[:-1]})? (?P<label>\[\S+\] )?(?P<url>https?({delim})?:({delim})?/({delim})?/(({delim})?\S+)+)\s',
        lambda mobj: ''.join((delim, mobj.group('label') or '', re.sub(r'\s+', '', mobj.group('url')), '\n'))
     ),
     (  # Do not split "words"
        rf'(?m)({delim}\S+)+$',
        lambda mobj: ''.join((delim, mobj.group(0).replace(delim, '')))
     ),
     (  # Allow overshooting last line
        rf'(?m)^(?P<prev>.+)${delim}(?P<current>.+)$(?!{delim})',
        lambda mobj: (mobj.group().replace(delim, ' ')
                      if len(mobj.group()) - len(delim) + 1 <= max_width + ALLOWED_OVERSHOOT
                      else mobj.group())
     ),
     (  # Avoid newline when a space is available b/w switch and description
        DISABLE_PATCH,  # This creates issues with prepare_manpage
        r'(?m)^(\s{4}-.{%d})(%s)' % (switch_col_width - 6, delim),
        r'\1 '
     ),
+    (  # Replace brackets with a Markdown link
+       r'SponsorBlock API \((http.+)\)',
+       r'[SponsorBlock API](\1)'
+    ),
 )

 readme = read_file(README_FILE)
@@ -7,16 +7,17 @@
 sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

+import argparse
 import contextlib
-import subprocess
 import sys
 from datetime import datetime

-from devscripts.utils import read_version, write_file
+from devscripts.utils import read_version, run_process, write_file


-def get_new_version(revision):
-    version = datetime.utcnow().strftime('%Y.%m.%d')
+def get_new_version(version, revision):
+    if not version:
+        version = datetime.utcnow().strftime('%Y.%m.%d')

     if revision:
         assert revision.isdigit(), 'Revision must be a number'
@@ -30,27 +31,41 @@ def get_new_version(revision):

 def get_git_head():
     with contextlib.suppress(Exception):
-        sp = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'], stdout=subprocess.PIPE)
-        return sp.communicate()[0].decode().strip() or None
+        return run_process('git', 'rev-parse', 'HEAD').stdout.strip()


-VERSION = get_new_version((sys.argv + [''])[1])
-GIT_HEAD = get_git_head()
-
-VERSION_FILE = f'''\
+VERSION_TEMPLATE = '''\
 # Autogenerated by devscripts/update-version.py

-__version__ = {VERSION!r}
+__version__ = {version!r}

-RELEASE_GIT_HEAD = {GIT_HEAD!r}
+RELEASE_GIT_HEAD = {git_head!r}

 VARIANT = None

 UPDATE_HINT = None
+
+CHANNEL = {channel!r}
 '''

-write_file('yt_dlp/version.py', VERSION_FILE)
-github_output = os.getenv('GITHUB_OUTPUT')
-if github_output:
-    write_file(github_output, f'ytdlp_version={VERSION}\n', 'a')
-print(f'\nVersion = {VERSION}, Git HEAD = {GIT_HEAD}')
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser(description='Update the version.py file')
+    parser.add_argument(
+        '-c', '--channel', default='stable',
+        help='Select update channel (default: %(default)s)')
+    parser.add_argument(
+        '-o', '--output', default='yt_dlp/version.py',
+        help='The output file to write to (default: %(default)s)')
+    parser.add_argument(
+        'version', nargs='?', default=None,
+        help='A version or revision to use instead of generating one')
+    args = parser.parse_args()
+
+    git_head = get_git_head()
+    version = (
+        args.version if args.version and '.' in args.version
+        else get_new_version(None, args.version))
+    write_file(args.output, VERSION_TEMPLATE.format(
+        version=version, git_head=git_head, channel=args.channel))
+
+    print(f'version={version} ({args.channel}), head={git_head}')
@@ -1,5 +1,6 @@
 import argparse
 import functools
+import subprocess


 def read_file(fname):
@@ -12,8 +13,8 @@ def write_file(fname, content, mode='w'):
         return f.write(content)


-# Get the version without importing the package
 def read_version(fname='yt_dlp/version.py'):
+    """Get the version without importing the package"""
     exec(compile(read_file(fname), fname, 'exec'))
     return locals()['__version__']

@@ -33,3 +34,13 @@ def get_filename_args(has_infile=False, default_outfile=None):

 def compose_functions(*functions):
     return lambda x: functools.reduce(lambda y, f: f(y), functions, x)


+def run_process(*args, **kwargs):
+    kwargs.setdefault('text', True)
+    kwargs.setdefault('check', True)
+    kwargs.setdefault('capture_output', True)
+    if kwargs['text']:
+        kwargs.setdefault('encoding', 'utf-8')
+        kwargs.setdefault('errors', 'replace')
+    return subprocess.run(args, **kwargs)
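The new `run_process` is a thin wrapper over `subprocess.run` with text mode, error checking and output capture on by default, which is why callers like `get_git_head` above can read `.stdout` directly; for example:

```python
from devscripts.utils import run_process

# Captured stdout is already decoded text thanks to the defaults above
head = run_process('git', 'rev-parse', 'HEAD').stdout.strip()
print(head)
```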

public.key (new file, 29 lines)
@@ -0,0 +1,29 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBGP78C4BEAD0rF9zjGPAt0thlt5C1ebzccAVX7Nb1v+eqQjk+WEZdTETVCg3
WAM5ngArlHdm/fZqzUgO+pAYrB60GKeg7ffUDf+S0XFKEZdeRLYeAaqqKhSibVal
DjvOBOztu3W607HLETQAqA7wTPuIt2WqmpL60NIcyr27LxqmgdN3mNvZ2iLO+bP0
nKR/C+PgE9H4ytywDa12zMx6PmZCnVOOOu6XZEFmdUxxdQ9fFDqd9LcBKY2LDOcS
Yo1saY0YWiZWHtzVoZu1kOzjnS5Fjq/yBHJLImDH7pNxHm7s/PnaurpmQFtDFruk
t+2lhDnpKUmGr/I/3IHqH/X+9nPoS4uiqQ5HpblB8BK+4WfpaiEg75LnvuOPfZIP
KYyXa/0A7QojMwgOrD88ozT+VCkKkkJ+ijXZ7gHNjmcBaUdKK7fDIEOYI63Lyc6Q
WkGQTigFffSUXWHDCO9aXNhP3ejqFWgGMtCUsrbkcJkWuWY7q5ARy/05HbSM3K4D
U9eqtnxmiV1WQ8nXuI9JgJQRvh5PTkny5LtxqzcmqvWO9TjHBbrs14BPEO9fcXxK
L/CFBbzXDSvvAgArdqqlMoncQ/yicTlfL6qzJ8EKFiqW14QMTdAn6SuuZTodXCTi
InwoT7WjjuFPKKdvfH1GP4bnqdzTnzLxCSDIEtfyfPsIX+9GI7Jkk/zZjQARAQAB
tDdTaW1vbiBTYXdpY2tpICh5dC1kbHAgc2lnbmluZyBrZXkpIDxjb250YWN0QGdy
dWI0ay54eXo+iQJOBBMBCgA4FiEErAy75oSNaoc0ZK9OV89lkztadYEFAmP78C4C
GwMFCwkIBwIGFQoJCAsCBBYCAwECHgECF4AACgkQV89lkztadYEVqQ//cW7TxhXg
7Xbh2EZQzXml0egn6j8QaV9KzGragMiShrlvTO2zXfLXqyizrFP4AspgjSn/4NrI
8mluom+Yi+qr7DXT4BjQqIM9y3AjwZPdywe912Lxcw52NNoPZCm24I9T7ySc8lmR
FQvZC0w4H/VTNj/2lgJ1dwMflpwvNRiWa5YzcFGlCUeDIPskLx9++AJE+xwU3LYm
jQQsPBqpHHiTBEJzMLl+rfd9Fg4N+QNzpFkTDW3EPerLuvJniSBBwZthqxeAtw4M
UiAXh6JvCc2hJkKCoygRfM281MeolvmsGNyQm+axlB0vyldiPP6BnaRgZlx+l6MU
cPqgHblb7RW5j9lfr6OYL7SceBIHNv0CFrt1OnkGo/tVMwcs8LH3Ae4a7UJlIceL
V54aRxSsZU7w4iX+PB79BWkEsQzwKrUuJVOeL4UDwWajp75OFaUqbS/slDDVXvK5
OIeuth3mA/adjdvgjPxhRQjA3l69rRWIJDrqBSHldmRsnX6cvXTDy8wSXZgy51lP
m4IVLHnCy9m4SaGGoAsfTZS0cC9FgjUIyTyrq9M67wOMpUxnuB0aRZgJE1DsI23E
qdvcSNVlO+39xM/KPWUEh6b83wMn88QeW+DCVGWACQq5N3YdPnAJa50617fGbY6I
gXIoRHXkDqe23PZ/jURYCv0sjVtjPoVC+bg=
=bJkn
-----END PGP PUBLIC KEY BLOCK-----

pyinst.py (32 lines changed)
@@ -37,7 +37,7 @@ def main():
         '--icon=devscripts/logo.ico',
         '--upx-exclude=vcruntime140.dll',
         '--noconfirm',
-        *dependency_options(),
+        '--additional-hooks-dir=yt_dlp/__pyinstaller',
         *opts,
         'yt_dlp/__main__.py',
     ]
@@ -77,30 +77,6 @@ def version_to_list(version):
     return list(map(int, version_list)) + [0] * (4 - len(version_list))


-def dependency_options():
-    # Due to the current implementation, these are auto-detected, but explicitly add them just in case
-    dependencies = [pycryptodome_module(), 'mutagen', 'brotli', 'certifi', 'websockets']
-    excluded_modules = ('youtube_dl', 'youtube_dlc', 'test', 'ytdlp_plugins', 'devscripts')
-
-    yield from (f'--hidden-import={module}' for module in dependencies)
-    yield '--collect-submodules=websockets'
-    yield from (f'--exclude-module={module}' for module in excluded_modules)
-
-
-def pycryptodome_module():
-    try:
-        import Cryptodome  # noqa: F401
-    except ImportError:
-        try:
-            import Crypto  # noqa: F401
-            print('WARNING: Using Crypto since Cryptodome is not available. '
-                  'Install with: pip install pycryptodomex', file=sys.stderr)
-            return 'Crypto'
-        except ImportError:
-            pass
-    return 'Cryptodome'
-
-
 def set_version_info(exe, version):
     if OS_NAME == 'win32':
         windows_set_version(exe, version)
@@ -109,7 +85,6 @@ def set_version_info(exe, version):
 def windows_set_version(exe, version):
     from PyInstaller.utils.win32.versioninfo import (
         FixedFileInfo,
-        SetVersion,
         StringFileInfo,
         StringStruct,
         StringTable,
@@ -118,6 +93,11 @@ def windows_set_version(exe, version):
         VSVersionInfo,
     )

+    try:
+        from PyInstaller.utils.win32.versioninfo import SetVersion
+    except ImportError:  # Pyinstaller >= 5.8
+        from PyInstaller.utils.win32.versioninfo import write_version_info_to_executable as SetVersion
+
     version_list = version_to_list(version)
     suffix = MACHINE and f'_{MACHINE}'
     SetVersion(exe, VSVersionInfo(
@ -8,6 +8,7 @@ ignore = E402,E501,E731,E741,W503
|
|||||||
max_line_length = 120
|
max_line_length = 120
|
||||||
per_file_ignores =
|
per_file_ignores =
|
||||||
devscripts/lazy_load_template.py: F401
|
devscripts/lazy_load_template.py: F401
|
||||||
|
yt_dlp/utils/__init__.py: F401, F403
|
||||||
|
|
||||||
|
|
||||||
[autoflake]
|
[autoflake]
|
||||||
5
setup.py
@@ -92,7 +92,10 @@ def build_params():
     params = {'data_files': data_files}
 
     if setuptools_available:
-        params['entry_points'] = {'console_scripts': ['yt-dlp = yt_dlp:main']}
+        params['entry_points'] = {
+            'console_scripts': ['yt-dlp = yt_dlp:main'],
+            'pyinstaller40': ['hook-dirs = yt_dlp.__pyinstaller:get_hook_dirs'],
+        }
     else:
         params['scripts'] = ['yt-dlp']
     return params
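The new `pyinstaller40` entry point is PyInstaller's mechanism for discovering hooks shipped inside a package: at build time it resolves `hook-dirs` to a callable and scans the returned directories for `hook-*.py` files. A minimal sketch of what such a hook package typically contains (the actual body of `yt_dlp/__pyinstaller` is not shown in this diff):

import os


def get_hook_dirs():
    # called by PyInstaller through the 'pyinstaller40'/'hook-dirs' entry
    # point; the returned directories are searched for hook-*.py files
    return [os.path.dirname(__file__)]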
File diff suppressed because it is too large
test/helper.py
@@ -194,8 +194,8 @@ def sanitize_got_info_dict(got_dict):
         'formats', 'thumbnails', 'subtitles', 'automatic_captions', 'comments', 'entries',
 
         # Auto-generated
-        'autonumber', 'playlist', 'format_index', 'video_ext', 'audio_ext', 'duration_string', 'epoch',
-        'fulltitle', 'extractor', 'extractor_key', 'filepath', 'infojson_filename', 'original_url', 'n_entries',
+        'autonumber', 'playlist', 'format_index', 'video_ext', 'audio_ext', 'duration_string', 'epoch', 'n_entries',
+        'fulltitle', 'extractor', 'extractor_key', 'filename', 'filepath', 'infojson_filename', 'original_url',
 
         # Only live_status needs to be checked
         'is_live', 'was_live',
test/test_InfoExtractor.py
@@ -69,6 +69,7 @@ def test_opengraph(self):
             <meta name="og:test1" content='foo > < bar'/>
             <meta name="og:test2" content="foo >//< bar"/>
             <meta property=og-test3 content='Ill-formatted opengraph'/>
+            <meta property=og:test4 content=unquoted-value/>
             '''
         self.assertEqual(ie._og_search_title(html), 'Foo')
         self.assertEqual(ie._og_search_description(html), 'Some video\'s description ')
@@ -81,6 +82,7 @@ def test_opengraph(self):
         self.assertEqual(ie._og_search_property(('test0', 'test1'), html), 'foo > < bar')
         self.assertRaises(RegexNotFoundError, ie._og_search_property, 'test0', html, None, fatal=True)
         self.assertRaises(RegexNotFoundError, ie._og_search_property, ('test0', 'test00'), html, None, fatal=True)
+        self.assertEqual(ie._og_search_property('test4', html), 'unquoted-value')
 
     def test_html_search_meta(self):
         ie = self.ie
@@ -915,8 +917,6 @@ def test_parse_m3u8_formats(self):
                     'acodec': 'mp4a.40.2',
                     'video_ext': 'mp4',
                     'audio_ext': 'none',
-                    'vbr': 263.851,
-                    'abr': 0,
                 }, {
                     'format_id': '577',
                     'format_index': None,
@@ -934,8 +934,6 @@ def test_parse_m3u8_formats(self):
                     'acodec': 'mp4a.40.2',
                     'video_ext': 'mp4',
                     'audio_ext': 'none',
-                    'vbr': 577.61,
-                    'abr': 0,
                 }, {
                     'format_id': '915',
                     'format_index': None,
@@ -953,8 +951,6 @@ def test_parse_m3u8_formats(self):
                     'acodec': 'mp4a.40.2',
                     'video_ext': 'mp4',
                     'audio_ext': 'none',
-                    'vbr': 915.905,
-                    'abr': 0,
                 }, {
                     'format_id': '1030',
                     'format_index': None,
@@ -972,8 +968,6 @@ def test_parse_m3u8_formats(self):
                     'acodec': 'mp4a.40.2',
                     'video_ext': 'mp4',
                     'audio_ext': 'none',
-                    'vbr': 1030.138,
-                    'abr': 0,
                 }, {
                     'format_id': '1924',
                     'format_index': None,
@@ -991,8 +985,6 @@ def test_parse_m3u8_formats(self):
                     'acodec': 'mp4a.40.2',
                     'video_ext': 'mp4',
                     'audio_ext': 'none',
-                    'vbr': 1924.009,
-                    'abr': 0,
                 }],
                 {
                     'en': [{
@@ -1404,6 +1396,7 @@ def test_parse_ism_formats(self):
                     'vcodec': 'none',
                     'acodec': 'AACL',
                     'protocol': 'ism',
+                    'audio_channels': 2,
                     '_download_params': {
                         'stream_type': 'audio',
                         'duration': 8880746666,
@@ -1417,9 +1410,6 @@ def test_parse_ism_formats(self):
                         'bits_per_sample': 16,
                         'nal_unit_length_field': 4
                     },
-                    'audio_ext': 'isma',
-                    'video_ext': 'none',
-                    'abr': 128,
                 }, {
                     'format_id': 'video-100',
                     'url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/Manifest',
@@ -1443,9 +1433,6 @@ def test_parse_ism_formats(self):
                         'bits_per_sample': 16,
                         'nal_unit_length_field': 4
                     },
-                    'video_ext': 'ismv',
-                    'audio_ext': 'none',
-                    'vbr': 100,
                 }, {
                     'format_id': 'video-326',
                     'url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/Manifest',
@@ -1469,9 +1456,6 @@ def test_parse_ism_formats(self):
                         'bits_per_sample': 16,
                         'nal_unit_length_field': 4
                     },
-                    'video_ext': 'ismv',
-                    'audio_ext': 'none',
-                    'vbr': 326,
                 }, {
                     'format_id': 'video-698',
                     'url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/Manifest',
@@ -1495,9 +1479,6 @@ def test_parse_ism_formats(self):
                         'bits_per_sample': 16,
                         'nal_unit_length_field': 4
                     },
-                    'video_ext': 'ismv',
-                    'audio_ext': 'none',
-                    'vbr': 698,
                 }, {
                     'format_id': 'video-1493',
                     'url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/Manifest',
@@ -1521,9 +1502,6 @@ def test_parse_ism_formats(self):
                         'bits_per_sample': 16,
                         'nal_unit_length_field': 4
                     },
-                    'video_ext': 'ismv',
-                    'audio_ext': 'none',
-                    'vbr': 1493,
                 }, {
                     'format_id': 'video-4482',
                     'url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/Manifest',
@@ -1547,9 +1525,6 @@ def test_parse_ism_formats(self):
                         'bits_per_sample': 16,
                         'nal_unit_length_field': 4
                     },
-                    'video_ext': 'ismv',
-                    'audio_ext': 'none',
-                    'vbr': 4482,
                 }],
                 {
                     'eng': [
@@ -1573,34 +1548,6 @@ def test_parse_ism_formats(self):
                 'ec-3_test',
                 'https://smstr01.dmm.t-online.de/smooth24/smoothstream_m1/streaming/sony/9221438342941275747/636887760842957027/25_km_h-Trailer-9221571562372022953_deu_20_1300k_HD_H_264_ISMV.ism/Manifest',
                 [{
-                    'format_id': 'audio_deu_1-224',
-                    'url': 'https://smstr01.dmm.t-online.de/smooth24/smoothstream_m1/streaming/sony/9221438342941275747/636887760842957027/25_km_h-Trailer-9221571562372022953_deu_20_1300k_HD_H_264_ISMV.ism/Manifest',
-                    'manifest_url': 'https://smstr01.dmm.t-online.de/smooth24/smoothstream_m1/streaming/sony/9221438342941275747/636887760842957027/25_km_h-Trailer-9221571562372022953_deu_20_1300k_HD_H_264_ISMV.ism/Manifest',
-                    'ext': 'isma',
-                    'tbr': 224,
-                    'asr': 48000,
-                    'vcodec': 'none',
-                    'acodec': 'EC-3',
-                    'protocol': 'ism',
-                    '_download_params':
-                    {
-                        'stream_type': 'audio',
-                        'duration': 370000000,
-                        'timescale': 10000000,
-                        'width': 0,
-                        'height': 0,
-                        'fourcc': 'EC-3',
-                        'language': 'deu',
-                        'codec_private_data': '00063F000000AF87FBA7022DFB42A4D405CD93843BDD0700200F00',
-                        'sampling_rate': 48000,
-                        'channels': 6,
-                        'bits_per_sample': 16,
-                        'nal_unit_length_field': 4
-                    },
-                    'audio_ext': 'isma',
-                    'video_ext': 'none',
-                    'abr': 224,
-                }, {
                     'format_id': 'audio_deu-127',
                     'url': 'https://smstr01.dmm.t-online.de/smooth24/smoothstream_m1/streaming/sony/9221438342941275747/636887760842957027/25_km_h-Trailer-9221571562372022953_deu_20_1300k_HD_H_264_ISMV.ism/Manifest',
                     'manifest_url': 'https://smstr01.dmm.t-online.de/smooth24/smoothstream_m1/streaming/sony/9221438342941275747/636887760842957027/25_km_h-Trailer-9221571562372022953_deu_20_1300k_HD_H_264_ISMV.ism/Manifest',
@@ -1610,8 +1557,9 @@ def test_parse_ism_formats(self):
                     'vcodec': 'none',
                     'acodec': 'AACL',
                     'protocol': 'ism',
-                    '_download_params':
-                    {
+                    'language': 'deu',
+                    'audio_channels': 2,
+                    '_download_params': {
                         'stream_type': 'audio',
                         'duration': 370000000,
                         'timescale': 10000000,
@@ -1625,9 +1573,32 @@ def test_parse_ism_formats(self):
                         'bits_per_sample': 16,
                         'nal_unit_length_field': 4
                     },
-                    'audio_ext': 'isma',
-                    'video_ext': 'none',
-                    'abr': 127,
+                }, {
+                    'format_id': 'audio_deu_1-224',
+                    'url': 'https://smstr01.dmm.t-online.de/smooth24/smoothstream_m1/streaming/sony/9221438342941275747/636887760842957027/25_km_h-Trailer-9221571562372022953_deu_20_1300k_HD_H_264_ISMV.ism/Manifest',
+                    'manifest_url': 'https://smstr01.dmm.t-online.de/smooth24/smoothstream_m1/streaming/sony/9221438342941275747/636887760842957027/25_km_h-Trailer-9221571562372022953_deu_20_1300k_HD_H_264_ISMV.ism/Manifest',
+                    'ext': 'isma',
+                    'tbr': 224,
+                    'asr': 48000,
+                    'vcodec': 'none',
+                    'acodec': 'EC-3',
+                    'protocol': 'ism',
+                    'language': 'deu',
+                    'audio_channels': 6,
+                    '_download_params': {
+                        'stream_type': 'audio',
+                        'duration': 370000000,
+                        'timescale': 10000000,
+                        'width': 0,
+                        'height': 0,
+                        'fourcc': 'EC-3',
+                        'language': 'deu',
+                        'codec_private_data': '00063F000000AF87FBA7022DFB42A4D405CD93843BDD0700200F00',
+                        'sampling_rate': 48000,
+                        'channels': 6,
+                        'bits_per_sample': 16,
+                        'nal_unit_length_field': 4
+                    },
                 }, {
                     'format_id': 'video_deu-23',
                     'url': 'https://smstr01.dmm.t-online.de/smooth24/smoothstream_m1/streaming/sony/9221438342941275747/636887760842957027/25_km_h-Trailer-9221571562372022953_deu_20_1300k_HD_H_264_ISMV.ism/Manifest',
@@ -1639,8 +1610,8 @@ def test_parse_ism_formats(self):
                     'vcodec': 'AVC1',
                     'acodec': 'none',
                     'protocol': 'ism',
-                    '_download_params':
-                    {
+                    'language': 'deu',
+                    '_download_params': {
                         'stream_type': 'video',
                         'duration': 370000000,
                         'timescale': 10000000,
@@ -1653,9 +1624,6 @@ def test_parse_ism_formats(self):
                         'bits_per_sample': 16,
                         'nal_unit_length_field': 4
                     },
-                    'video_ext': 'ismv',
-                    'audio_ext': 'none',
-                    'vbr': 23,
                 }, {
                     'format_id': 'video_deu-403',
                     'url': 'https://smstr01.dmm.t-online.de/smooth24/smoothstream_m1/streaming/sony/9221438342941275747/636887760842957027/25_km_h-Trailer-9221571562372022953_deu_20_1300k_HD_H_264_ISMV.ism/Manifest',
@@ -1667,8 +1635,8 @@ def test_parse_ism_formats(self):
                     'vcodec': 'AVC1',
                     'acodec': 'none',
                     'protocol': 'ism',
-                    '_download_params':
-                    {
+                    'language': 'deu',
+                    '_download_params': {
                         'stream_type': 'video',
                         'duration': 370000000,
                         'timescale': 10000000,
@@ -1681,9 +1649,6 @@ def test_parse_ism_formats(self):
                         'bits_per_sample': 16,
                         'nal_unit_length_field': 4
                     },
-                    'video_ext': 'ismv',
-                    'audio_ext': 'none',
-                    'vbr': 403,
                 }, {
                     'format_id': 'video_deu-680',
                     'url': 'https://smstr01.dmm.t-online.de/smooth24/smoothstream_m1/streaming/sony/9221438342941275747/636887760842957027/25_km_h-Trailer-9221571562372022953_deu_20_1300k_HD_H_264_ISMV.ism/Manifest',
@@ -1695,8 +1660,8 @@ def test_parse_ism_formats(self):
                     'vcodec': 'AVC1',
                     'acodec': 'none',
                     'protocol': 'ism',
-                    '_download_params':
-                    {
+                    'language': 'deu',
+                    '_download_params': {
                         'stream_type': 'video',
                         'duration': 370000000,
                         'timescale': 10000000,
@@ -1709,9 +1674,6 @@ def test_parse_ism_formats(self):
                         'bits_per_sample': 16,
                         'nal_unit_length_field': 4
                     },
-                    'video_ext': 'ismv',
-                    'audio_ext': 'none',
-                    'vbr': 680,
                 }, {
                     'format_id': 'video_deu-1253',
                     'url': 'https://smstr01.dmm.t-online.de/smooth24/smoothstream_m1/streaming/sony/9221438342941275747/636887760842957027/25_km_h-Trailer-9221571562372022953_deu_20_1300k_HD_H_264_ISMV.ism/Manifest',
@@ -1723,8 +1685,9 @@ def test_parse_ism_formats(self):
                     'vcodec': 'AVC1',
                     'acodec': 'none',
                     'protocol': 'ism',
-                    '_download_params':
-                    {
+                    'vbr': 1253,
+                    'language': 'deu',
+                    '_download_params': {
                         'stream_type': 'video',
                         'duration': 370000000,
                         'timescale': 10000000,
@@ -1737,9 +1700,6 @@ def test_parse_ism_formats(self):
                         'bits_per_sample': 16,
                         'nal_unit_length_field': 4
                     },
-                    'video_ext': 'ismv',
-                    'audio_ext': 'none',
-                    'vbr': 1253,
                 }, {
                     'format_id': 'video_deu-2121',
                     'url': 'https://smstr01.dmm.t-online.de/smooth24/smoothstream_m1/streaming/sony/9221438342941275747/636887760842957027/25_km_h-Trailer-9221571562372022953_deu_20_1300k_HD_H_264_ISMV.ism/Manifest',
@@ -1751,8 +1711,8 @@ def test_parse_ism_formats(self):
                     'vcodec': 'AVC1',
                     'acodec': 'none',
                     'protocol': 'ism',
-                    '_download_params':
-                    {
+                    'language': 'deu',
+                    '_download_params': {
                         'stream_type': 'video',
                         'duration': 370000000,
                         'timescale': 10000000,
@@ -1765,9 +1725,6 @@ def test_parse_ism_formats(self):
                         'bits_per_sample': 16,
                         'nal_unit_length_field': 4
                     },
-                    'video_ext': 'ismv',
-                    'audio_ext': 'none',
-                    'vbr': 2121,
                 }, {
                     'format_id': 'video_deu-3275',
                     'url': 'https://smstr01.dmm.t-online.de/smooth24/smoothstream_m1/streaming/sony/9221438342941275747/636887760842957027/25_km_h-Trailer-9221571562372022953_deu_20_1300k_HD_H_264_ISMV.ism/Manifest',
@@ -1779,8 +1736,8 @@ def test_parse_ism_formats(self):
                     'vcodec': 'AVC1',
                     'acodec': 'none',
                     'protocol': 'ism',
-                    '_download_params':
-                    {
+                    'language': 'deu',
+                    '_download_params': {
                         'stream_type': 'video',
                         'duration': 370000000,
                         'timescale': 10000000,
@@ -1793,9 +1750,6 @@ def test_parse_ism_formats(self):
                         'bits_per_sample': 16,
                         'nal_unit_length_field': 4
                     },
-                    'video_ext': 'ismv',
-                    'audio_ext': 'none',
-                    'vbr': 3275,
                 }, {
                     'format_id': 'video_deu-5300',
                     'url': 'https://smstr01.dmm.t-online.de/smooth24/smoothstream_m1/streaming/sony/9221438342941275747/636887760842957027/25_km_h-Trailer-9221571562372022953_deu_20_1300k_HD_H_264_ISMV.ism/Manifest',
@@ -1807,8 +1761,8 @@ def test_parse_ism_formats(self):
                     'vcodec': 'AVC1',
                     'acodec': 'none',
                     'protocol': 'ism',
-                    '_download_params':
-                    {
+                    'language': 'deu',
+                    '_download_params': {
                         'stream_type': 'video',
                         'duration': 370000000,
                         'timescale': 10000000,
@@ -1821,9 +1775,6 @@ def test_parse_ism_formats(self):
                         'bits_per_sample': 16,
                         'nal_unit_length_field': 4
                     },
-                    'video_ext': 'ismv',
-                    'audio_ext': 'none',
-                    'vbr': 5300,
                 }, {
                     'format_id': 'video_deu-8079',
                     'url': 'https://smstr01.dmm.t-online.de/smooth24/smoothstream_m1/streaming/sony/9221438342941275747/636887760842957027/25_km_h-Trailer-9221571562372022953_deu_20_1300k_HD_H_264_ISMV.ism/Manifest',
@@ -1835,8 +1786,8 @@ def test_parse_ism_formats(self):
                     'vcodec': 'AVC1',
                     'acodec': 'none',
                     'protocol': 'ism',
-                    '_download_params':
-                    {
+                    'language': 'deu',
+                    '_download_params': {
                         'stream_type': 'video',
                         'duration': 370000000,
                         'timescale': 10000000,
@@ -1849,9 +1800,6 @@ def test_parse_ism_formats(self):
                         'bits_per_sample': 16,
                         'nal_unit_length_field': 4
                     },
-                    'video_ext': 'ismv',
-                    'audio_ext': 'none',
-                    'vbr': 8079,
                 }],
                 {},
             ),
test/test_YoutubeDL.py
@@ -10,7 +10,6 @@
 
 import copy
 import json
-import urllib.error
 
 from test.helper import FakeYDL, assertRegexpMatches
 from yt_dlp import YoutubeDL
@@ -631,6 +630,7 @@ def test_add_extra_info(self):
         self.assertEqual(test_dict['playlist'], 'funny videos')
 
     outtmpl_info = {
+        'id': '1234',
         'id': '1234',
         'ext': 'mp4',
         'width': None,
@@ -669,7 +669,7 @@ def test(tmpl, expected, *, info=None, **params):
             for (name, got), expect in zip((('outtmpl', out), ('filename', fname)), expected):
                 if callable(expect):
                     self.assertTrue(expect(got), f'Wrong {name} from {tmpl}')
-                else:
+                elif expect is not None:
                     self.assertEqual(got, expect, f'Wrong {name} from {tmpl}')
 
         # Side-effects
@@ -755,20 +755,23 @@ def expect_same_infodict(out):
         test('%(ext)c', 'm')
         test('%(id)d %(id)r', "1234 '1234'")
         test('%(id)r %(height)r', "'1234' 1080")
+        test('%(title5)a %(height)a', (R"'\xe1\xe9\xed \U0001d400' 1080", None))
         test('%(ext)s-%(ext|def)d', 'mp4-def')
-        test('%(width|0)04d', '0000')
-        test('a%(width|)d', 'a', outtmpl_na_placeholder='none')
+        test('%(width|0)04d', '0')
+        test('a%(width|b)d', 'ab', outtmpl_na_placeholder='none')
 
         FORMATS = self.outtmpl_info['formats']
-        sanitize = lambda x: x.replace(':', '：').replace('"', '＂').replace('\n', ' ')
 
         # Custom type casting
         test('%(formats.:.id)l', 'id 1, id 2, id 3')
         test('%(formats.:.id)#l', ('id 1\nid 2\nid 3', 'id 1 id 2 id 3'))
         test('%(ext)l', 'mp4')
         test('%(formats.:.id) 18l', '  id 1, id 2, id 3')
-        test('%(formats)j', (json.dumps(FORMATS), sanitize(json.dumps(FORMATS))))
-        test('%(formats)#j', (json.dumps(FORMATS, indent=4), sanitize(json.dumps(FORMATS, indent=4))))
+        test('%(formats)j', (json.dumps(FORMATS), None))
+        test('%(formats)#j', (
+            json.dumps(FORMATS, indent=4),
+            json.dumps(FORMATS, indent=4).replace(':', '：').replace('"', '＂').replace('\n', ' ')
+        ))
         test('%(title5).3B', 'á')
         test('%(title5)U', 'áéí 𝐀')
         test('%(title5)#U', 'a\u0301e\u0301i\u0301 𝐀')
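The second element of each expectation pair above is the filename-sanitized form: yt-dlp replaces Windows-forbidden characters with fullwidth lookalikes, which is exactly what the inlined `.replace(':', '：').replace('"', '＂')` chain encodes. A one-line check of that mapping:

assert '{"a": 1}'.replace(':', '：').replace('"', '＂') == '{＂a＂： 1}'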
@@ -793,8 +796,8 @@ def expect_same_infodict(out):
         test('%(title|%)s %(title|%%)s', '% %%')
         test('%(id+1-height+3)05d', '00158')
         test('%(width+100)05d', 'NA')
-        test('%(formats.0) 15s', ('% 15s' % FORMATS[0], '% 15s' % sanitize(str(FORMATS[0]))))
-        test('%(formats.0)r', (repr(FORMATS[0]), sanitize(repr(FORMATS[0]))))
+        test('%(formats.0) 15s', ('% 15s' % FORMATS[0], None))
+        test('%(formats.0)r', (repr(FORMATS[0]), None))
         test('%(height.0)03d', '001')
         test('%(-height.0)04d', '-001')
         test('%(formats.-1.id)s', FORMATS[-1]['id'])
@@ -806,7 +809,7 @@ def expect_same_infodict(out):
         out = json.dumps([{'id': f['id'], 'height.:2': str(f['height'])[:2]}
                           if 'height' in f else {'id': f['id']}
                           for f in FORMATS])
-        test('%(formats.:.{id,height.:2})j', (out, sanitize(out)))
+        test('%(formats.:.{id,height.:2})j', (out, None))
         test('%(formats.:.{id,height}.id)l', ', '.join(f['id'] for f in FORMATS))
         test('%(.{id,title})j', ('{"id": "1234"}', '{＂id＂： ＂1234＂}'))
@@ -822,6 +825,10 @@ def expect_same_infodict(out):
         test('%(title&foo|baz)s.bar', 'baz.bar')
         test('%(x,id&foo|baz)s.bar', 'foo.bar')
         test('%(x,title&foo|baz)s.bar', 'baz.bar')
+        test('%(id&a\nb|)s', ('a\nb', 'a b'))
+        test('%(id&hi {:>10} {}|)s', 'hi       1234 1234')
+        test(R'%(id&{0} {}|)s', 'NA')
+        test(R'%(id&{0.1}|)s', 'NA')
 
         # Laziness
         def gen():
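The new `&`-replacement tests rely on `str.format`-style placeholders being filled with the field's value, with format errors collapsing to `NA`. In plain Python, the successful case evaluates as below; mixing manual (`{0}`) and automatic (`{}`) numbering in one format string raises `ValueError`, which the template engine reports as `NA`:

value = '1234'
assert 'hi {:>10} {}'.format(value, value) == 'hi       1234 1234'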
@@ -867,12 +874,12 @@ def test_postprocessors(self):
 
         class SimplePP(PostProcessor):
             def run(self, info):
-                with open(audiofile, 'wt') as f:
+                with open(audiofile, 'w') as f:
                     f.write('EXAMPLE')
                 return [info['filepath']], info
 
         def run_pp(params, PP):
-            with open(filename, 'wt') as f:
+            with open(filename, 'w') as f:
                 f.write('EXAMPLE')
             ydl = YoutubeDL(params)
             ydl.add_post_processor(PP())
@@ -891,7 +898,7 @@ def run_pp(params, PP):
 
         class ModifierPP(PostProcessor):
             def run(self, info):
-                with open(info['filepath'], 'wt') as f:
+                with open(info['filepath'], 'w') as f:
                     f.write('MODIFIED')
                 return [], info
 
@@ -1093,11 +1100,6 @@ def test_selection(params, expected_ids, evaluate_all=False):
         test_selection({'playlist_items': '-15::2'}, INDICES[1::2], True)
         test_selection({'playlist_items': '-15::15'}, [], True)
 
-    def test_urlopen_no_file_protocol(self):
-        # see https://github.com/ytdl-org/youtube-dl/issues/8227
-        ydl = YDL()
-        self.assertRaises(urllib.error.URLError, ydl.urlopen, 'file:///etc/passwd')
-
     def test_do_not_override_ie_key_in_url_transparent(self):
         ydl = YDL()
test/test_YoutubeDLCookieJar.py
@@ -11,7 +11,7 @@
 import re
 import tempfile
 
-from yt_dlp.utils import YoutubeDLCookieJar
+from yt_dlp.cookies import YoutubeDLCookieJar
 
 
 class TestYoutubeDLCookieJar(unittest.TestCase):
@@ -47,6 +47,12 @@ def test_malformed_cookies(self):
         # will be ignored
         self.assertFalse(cookiejar._cookies)
 
+    def test_get_cookie_header(self):
+        cookiejar = YoutubeDLCookieJar('./test/testdata/cookies/httponly_cookies.txt')
+        cookiejar.load(ignore_discard=True, ignore_expires=True)
+        header = cookiejar.get_cookie_header('https://www.foobar.foobar')
+        self.assertIn('HTTPONLY_COOKIE', header)
+
 
 if __name__ == '__main__':
     unittest.main()
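A short usage sketch of the API this new test exercises; `get_cookie_header` returns the `Cookie:` header value for a URL, and the fixture used here contains an HttpOnly cookie:

from yt_dlp.cookies import YoutubeDLCookieJar

jar = YoutubeDLCookieJar('./test/testdata/cookies/httponly_cookies.txt')
jar.load(ignore_discard=True, ignore_expires=True)
print(jar.get_cookie_header('https://www.foobar.foobar'))  # includes HTTPONLY_COOKIE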
test/test_aes.py
@@ -26,7 +26,7 @@
     key_expansion,
     pad_block,
 )
-from yt_dlp.dependencies import Cryptodome_AES
+from yt_dlp.dependencies import Cryptodome
 from yt_dlp.utils import bytes_to_intlist, intlist_to_bytes
 
 # the encrypted data can be generate with 'devscripts/generate_aes_testdata.py'
@@ -48,7 +48,7 @@ def test_cbc_decrypt(self):
         data = b'\x97\x92+\xe5\x0b\xc3\x18\x91ky9m&\xb3\xb5@\xe6\x27\xc2\x96.\xc8u\x88\xab9-[\x9e|\xf1\xcd'
         decrypted = intlist_to_bytes(aes_cbc_decrypt(bytes_to_intlist(data), self.key, self.iv))
         self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
-        if Cryptodome_AES:
+        if Cryptodome.AES:
             decrypted = aes_cbc_decrypt_bytes(data, intlist_to_bytes(self.key), intlist_to_bytes(self.iv))
             self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
 
@@ -78,7 +78,7 @@ def test_gcm_decrypt(self):
         decrypted = intlist_to_bytes(aes_gcm_decrypt_and_verify(
             bytes_to_intlist(data), self.key, bytes_to_intlist(authentication_tag), self.iv[:12]))
         self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
-        if Cryptodome_AES:
+        if Cryptodome.AES:
             decrypted = aes_gcm_decrypt_and_verify_bytes(
                 data, intlist_to_bytes(self.key), authentication_tag, intlist_to_bytes(self.iv[:12]))
             self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
test/test_age_restriction.py
@@ -10,6 +10,7 @@
 
 from test.helper import is_download_test, try_rm
 from yt_dlp import YoutubeDL
+from yt_dlp.utils import DownloadError
 
 
 def _download_restricted(url, filename, age):
@@ -25,10 +26,14 @@ def _download_restricted(url, filename, age):
     ydl.add_default_info_extractors()
     json_filename = os.path.splitext(filename)[0] + '.info.json'
     try_rm(json_filename)
-    ydl.download([url])
-    res = os.path.exists(json_filename)
-    try_rm(json_filename)
-    return res
+    try:
+        ydl.download([url])
+    except DownloadError:
+        pass
+    else:
+        return os.path.exists(json_filename)
+    finally:
+        try_rm(json_filename)
 
 
 @is_download_test
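The rewritten helper leans on the full `try` statement ordering: `else` runs only when the `try` body raised nothing, and `finally` runs on every path, even while a `return` from `else` or `except` is in flight. A self-contained toy showing the same flow:

def demo(fail):
    try:
        if fail:
            raise ValueError
    except ValueError:
        return 'handled'  # 'finally' still executes before this returns
    else:
        return 'clean'    # reached only when nothing was raised
    finally:
        print('cleanup')  # runs in every case


assert demo(False) == 'clean'
assert demo(True) == 'handled'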
@@ -38,12 +43,12 @@ def _assert_restricted(self, url, filename, age, old_age=None):
         self.assertFalse(_download_restricted(url, filename, age))
 
     def test_youtube(self):
-        self._assert_restricted('07FYdnEawAQ', '07FYdnEawAQ.mp4', 10)
+        self._assert_restricted('HtVdAasjOgU', 'HtVdAasjOgU.mp4', 10)
 
     def test_youporn(self):
         self._assert_restricted(
-            'http://www.youporn.com/watch/505835/sex-ed-is-it-safe-to-masturbate-daily/',
-            '505835.mp4', 2, old_age=25)
+            'https://www.youporn.com/watch/16715086/sex-ed-in-detention-18-asmr/',
+            '16715086.mp4', 2, old_age=25)
 
 
 if __name__ == '__main__':
test/test_compat.py
@@ -31,6 +31,9 @@ def test_compat_passthrough(self):
         # TODO: Test submodule
         # compat.asyncio.events  # Must not raise error
 
+        with self.assertWarns(DeprecationWarning):
+            compat.compat_pycrypto_AES  # Must not raise error
+
     def test_compat_expanduser(self):
         old_home = os.environ.get('HOME')
         test_str = R'C:\Documents and Settings\тест\Application Data'
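`assertWarns` here verifies that touching the removed `compat_pycrypto_AES` alias emits a `DeprecationWarning` rather than an error. One common way to implement that is a PEP 562 module-level `__getattr__` (an assumption about the general mechanism, not necessarily yt-dlp's exact code):

# sketch of a compat module's __init__.py
import warnings


def __getattr__(name):
    if name == 'compat_pycrypto_AES':
        warnings.warn(f'{name} is deprecated', DeprecationWarning, stacklevel=2)
        return None  # or the modern replacement object
    raise AttributeError(name)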
test/test_cookies.py
@@ -49,32 +49,38 @@ def test_get_desktop_environment(self):
         """ based on https://chromium.googlesource.com/chromium/src/+/refs/heads/main/base/nix/xdg_util_unittest.cc """
         test_cases = [
             ({}, _LinuxDesktopEnvironment.OTHER),
+            ({'DESKTOP_SESSION': 'my_custom_de'}, _LinuxDesktopEnvironment.OTHER),
+            ({'XDG_CURRENT_DESKTOP': 'my_custom_de'}, _LinuxDesktopEnvironment.OTHER),
+
             ({'DESKTOP_SESSION': 'gnome'}, _LinuxDesktopEnvironment.GNOME),
             ({'DESKTOP_SESSION': 'mate'}, _LinuxDesktopEnvironment.GNOME),
-            ({'DESKTOP_SESSION': 'kde4'}, _LinuxDesktopEnvironment.KDE),
-            ({'DESKTOP_SESSION': 'kde'}, _LinuxDesktopEnvironment.KDE),
+            ({'DESKTOP_SESSION': 'kde4'}, _LinuxDesktopEnvironment.KDE4),
+            ({'DESKTOP_SESSION': 'kde'}, _LinuxDesktopEnvironment.KDE3),
             ({'DESKTOP_SESSION': 'xfce'}, _LinuxDesktopEnvironment.XFCE),
 
             ({'GNOME_DESKTOP_SESSION_ID': 1}, _LinuxDesktopEnvironment.GNOME),
-            ({'KDE_FULL_SESSION': 1}, _LinuxDesktopEnvironment.KDE),
+            ({'KDE_FULL_SESSION': 1}, _LinuxDesktopEnvironment.KDE3),
+            ({'KDE_FULL_SESSION': 1, 'DESKTOP_SESSION': 'kde4'}, _LinuxDesktopEnvironment.KDE4),
 
             ({'XDG_CURRENT_DESKTOP': 'X-Cinnamon'}, _LinuxDesktopEnvironment.CINNAMON),
+            ({'XDG_CURRENT_DESKTOP': 'Deepin'}, _LinuxDesktopEnvironment.DEEPIN),
             ({'XDG_CURRENT_DESKTOP': 'GNOME'}, _LinuxDesktopEnvironment.GNOME),
             ({'XDG_CURRENT_DESKTOP': 'GNOME:GNOME-Classic'}, _LinuxDesktopEnvironment.GNOME),
             ({'XDG_CURRENT_DESKTOP': 'GNOME : GNOME-Classic'}, _LinuxDesktopEnvironment.GNOME),
 
             ({'XDG_CURRENT_DESKTOP': 'Unity', 'DESKTOP_SESSION': 'gnome-fallback'}, _LinuxDesktopEnvironment.GNOME),
-            ({'XDG_CURRENT_DESKTOP': 'KDE', 'KDE_SESSION_VERSION': '5'}, _LinuxDesktopEnvironment.KDE),
-            ({'XDG_CURRENT_DESKTOP': 'KDE'}, _LinuxDesktopEnvironment.KDE),
+            ({'XDG_CURRENT_DESKTOP': 'KDE', 'KDE_SESSION_VERSION': '5'}, _LinuxDesktopEnvironment.KDE5),
+            ({'XDG_CURRENT_DESKTOP': 'KDE', 'KDE_SESSION_VERSION': '6'}, _LinuxDesktopEnvironment.KDE6),
+            ({'XDG_CURRENT_DESKTOP': 'KDE'}, _LinuxDesktopEnvironment.KDE4),
             ({'XDG_CURRENT_DESKTOP': 'Pantheon'}, _LinuxDesktopEnvironment.PANTHEON),
+            ({'XDG_CURRENT_DESKTOP': 'UKUI'}, _LinuxDesktopEnvironment.UKUI),
             ({'XDG_CURRENT_DESKTOP': 'Unity'}, _LinuxDesktopEnvironment.UNITY),
             ({'XDG_CURRENT_DESKTOP': 'Unity:Unity7'}, _LinuxDesktopEnvironment.UNITY),
             ({'XDG_CURRENT_DESKTOP': 'Unity:Unity8'}, _LinuxDesktopEnvironment.UNITY),
         ]
 
         for env, expected_desktop_environment in test_cases:
-            self.assertEqual(_get_linux_desktop_environment(env), expected_desktop_environment)
+            self.assertEqual(_get_linux_desktop_environment(env, Logger()), expected_desktop_environment)
 
     def test_chrome_cookie_decryptor_linux_derive_key(self):
         key = LinuxChromeCookieDecryptor.derive_key(b'abc')
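The split of the old `KDE` value into `KDE3`/`KDE4`/`KDE5`/`KDE6` follows `KDE_SESSION_VERSION`. Distilled from the cases above (an inference from the tests, not the actual `yt_dlp.cookies` source):

def kde_flavour(env):
    # with XDG_CURRENT_DESKTOP == 'KDE': '5' -> KDE5, '6' -> KDE6,
    # and a missing KDE_SESSION_VERSION is treated as KDE4
    return {'5': 'KDE5', '6': 'KDE6'}.get(env.get('KDE_SESSION_VERSION'), 'KDE4')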
test/test_http.py
@@ -7,40 +7,190 @@
 
 sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
 
+import gzip
+import http.cookiejar
 import http.server
+import io
+import pathlib
 import ssl
+import tempfile
 import threading
+import urllib.error
 import urllib.request
+import zlib
 
 from test.helper import http_server_port
 from yt_dlp import YoutubeDL
+from yt_dlp.dependencies import brotli
+from yt_dlp.utils import sanitized_Request, urlencode_postdata
+
+from .helper import FakeYDL
 
 TEST_DIR = os.path.dirname(os.path.abspath(__file__))
 
 
 class HTTPTestRequestHandler(http.server.BaseHTTPRequestHandler):
+    protocol_version = 'HTTP/1.1'
+
     def log_message(self, format, *args):
         pass
 
+    def _headers(self):
+        payload = str(self.headers).encode('utf-8')
+        self.send_response(200)
+        self.send_header('Content-Type', 'application/json')
+        self.send_header('Content-Length', str(len(payload)))
+        self.end_headers()
+        self.wfile.write(payload)
+
+    def _redirect(self):
+        self.send_response(int(self.path[len('/redirect_'):]))
+        self.send_header('Location', '/method')
+        self.send_header('Content-Length', '0')
+        self.end_headers()
+
+    def _method(self, method, payload=None):
+        self.send_response(200)
+        self.send_header('Content-Length', str(len(payload or '')))
+        self.send_header('Method', method)
+        self.end_headers()
+        if payload:
+            self.wfile.write(payload)
+
+    def _status(self, status):
+        payload = f'<html>{status} NOT FOUND</html>'.encode()
+        self.send_response(int(status))
+        self.send_header('Content-Type', 'text/html; charset=utf-8')
+        self.send_header('Content-Length', str(len(payload)))
+        self.end_headers()
+        self.wfile.write(payload)
+
+    def _read_data(self):
+        if 'Content-Length' in self.headers:
+            return self.rfile.read(int(self.headers['Content-Length']))
+
+    def do_POST(self):
+        data = self._read_data()
+        if self.path.startswith('/redirect_'):
+            self._redirect()
+        elif self.path.startswith('/method'):
+            self._method('POST', data)
+        elif self.path.startswith('/headers'):
+            self._headers()
+        else:
+            self._status(404)
+
+    def do_HEAD(self):
+        if self.path.startswith('/redirect_'):
+            self._redirect()
+        elif self.path.startswith('/method'):
+            self._method('HEAD')
+        else:
+            self._status(404)
+
+    def do_PUT(self):
+        data = self._read_data()
+        if self.path.startswith('/redirect_'):
+            self._redirect()
+        elif self.path.startswith('/method'):
+            self._method('PUT', data)
+        else:
+            self._status(404)
+
     def do_GET(self):
         if self.path == '/video.html':
+            payload = b'<html><video src="/vid.mp4" /></html>'
             self.send_response(200)
             self.send_header('Content-Type', 'text/html; charset=utf-8')
+            self.send_header('Content-Length', str(len(payload)))  # required for persistent connections
             self.end_headers()
-            self.wfile.write(b'<html><video src="/vid.mp4" /></html>')
+            self.wfile.write(payload)
         elif self.path == '/vid.mp4':
+            payload = b'\x00\x00\x00\x00\x20\x66\x74[video]'
             self.send_response(200)
             self.send_header('Content-Type', 'video/mp4')
+            self.send_header('Content-Length', str(len(payload)))
             self.end_headers()
-            self.wfile.write(b'\x00\x00\x00\x00\x20\x66\x74[video]')
+            self.wfile.write(payload)
         elif self.path == '/%E4%B8%AD%E6%96%87.html':
+            payload = b'<html><video src="/vid.mp4" /></html>'
             self.send_response(200)
             self.send_header('Content-Type', 'text/html; charset=utf-8')
+            self.send_header('Content-Length', str(len(payload)))
            self.end_headers()
-            self.wfile.write(b'<html><video src="/vid.mp4" /></html>')
+            self.wfile.write(payload)
+        elif self.path == '/%c7%9f':
+            payload = b'<html><video src="/vid.mp4" /></html>'
+            self.send_response(200)
+            self.send_header('Content-Type', 'text/html; charset=utf-8')
+            self.send_header('Content-Length', str(len(payload)))
+            self.end_headers()
+            self.wfile.write(payload)
+        elif self.path.startswith('/redirect_'):
+            self._redirect()
+        elif self.path.startswith('/method'):
+            self._method('GET')
+        elif self.path.startswith('/headers'):
+            self._headers()
+        elif self.path == '/trailing_garbage':
+            payload = b'<html><video src="/vid.mp4" /></html>'
+            self.send_response(200)
+            self.send_header('Content-Type', 'text/html; charset=utf-8')
+            self.send_header('Content-Encoding', 'gzip')
+            buf = io.BytesIO()
+            with gzip.GzipFile(fileobj=buf, mode='wb') as f:
+                f.write(payload)
+            compressed = buf.getvalue() + b'trailing garbage'
+            self.send_header('Content-Length', str(len(compressed)))
+            self.end_headers()
+            self.wfile.write(compressed)
+        elif self.path == '/302-non-ascii-redirect':
+            new_url = f'http://127.0.0.1:{http_server_port(self.server)}/中文.html'
+            self.send_response(301)
+            self.send_header('Location', new_url)
+            self.send_header('Content-Length', '0')
+            self.end_headers()
+        elif self.path == '/content-encoding':
+            encodings = self.headers.get('ytdl-encoding', '')
+            payload = b'<html><video src="/vid.mp4" /></html>'
+            for encoding in filter(None, (e.strip() for e in encodings.split(','))):
+                if encoding == 'br' and brotli:
+                    payload = brotli.compress(payload)
+                elif encoding == 'gzip':
+                    buf = io.BytesIO()
+                    with gzip.GzipFile(fileobj=buf, mode='wb') as f:
+                        f.write(payload)
+                    payload = buf.getvalue()
+                elif encoding == 'deflate':
+                    payload = zlib.compress(payload)
+                elif encoding == 'unsupported':
+                    payload = b'raw'
+                    break
+                else:
+                    self._status(415)
+                    return
+            self.send_response(200)
+            self.send_header('Content-Encoding', encodings)
+            self.send_header('Content-Length', str(len(payload)))
+            self.end_headers()
+            self.wfile.write(payload)
+
         else:
-            assert False
+            self._status(404)
+
+    def send_header(self, keyword, value):
+        """
+        Forcibly allow HTTP server to send non percent-encoded non-ASCII characters in headers.
+        This is against what is defined in RFC 3986, however we need to test we support this
+        since some sites incorrectly do this.
+        """
+        if keyword.lower() == 'connection':
+            return super().send_header(keyword, value)
+
+        if not hasattr(self, '_headers_buffer'):
+            self._headers_buffer = []
+
+        self._headers_buffer.append(f'{keyword}: {value}\r\n'.encode())
 
 
 class FakeLogger:
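The `/content-encoding` route applies each coding listed in the `ytdl-encoding` request header in order, so a client has to undo them in reverse. The two stdlib codings round-trip as below (a standalone check, independent of the test server):

import gzip
import io
import zlib

payload = b'<html><video src="/vid.mp4" /></html>'

buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode='wb') as f:
    f.write(payload)
assert gzip.decompress(buf.getvalue()) == payload

assert zlib.decompress(zlib.compress(payload)) == payload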
@ -56,36 +206,177 @@ def error(self, msg):
|
|||||||
|
|
||||||
class TestHTTP(unittest.TestCase):
|
class TestHTTP(unittest.TestCase):
|
||||||
def setUp(self):
|
def setUp(self):
|
||||||
self.httpd = http.server.HTTPServer(
|
# HTTP server
|
||||||
|
self.http_httpd = http.server.ThreadingHTTPServer(
|
||||||
('127.0.0.1', 0), HTTPTestRequestHandler)
|
('127.0.0.1', 0), HTTPTestRequestHandler)
|
||||||
self.port = http_server_port(self.httpd)
|
self.http_port = http_server_port(self.http_httpd)
|
||||||
self.server_thread = threading.Thread(target=self.httpd.serve_forever)
|
self.http_server_thread = threading.Thread(target=self.http_httpd.serve_forever)
|
||||||
self.server_thread.daemon = True
|
# FIXME: we should probably stop the http server thread after each test
|
||||||
self.server_thread.start()
|
# See: https://github.com/yt-dlp/yt-dlp/pull/7094#discussion_r1199746041
|
||||||
|
self.http_server_thread.daemon = True
|
||||||
|
self.http_server_thread.start()
|
||||||
|
|
||||||
|
# HTTPS server
|
||||||
class TestHTTPS(unittest.TestCase):
|
|
||||||
def setUp(self):
|
|
||||||
certfn = os.path.join(TEST_DIR, 'testcert.pem')
|
certfn = os.path.join(TEST_DIR, 'testcert.pem')
|
||||||
self.httpd = http.server.HTTPServer(
|
self.https_httpd = http.server.ThreadingHTTPServer(
|
||||||
('127.0.0.1', 0), HTTPTestRequestHandler)
|
('127.0.0.1', 0), HTTPTestRequestHandler)
|
||||||
sslctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
|
sslctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
|
||||||
sslctx.load_cert_chain(certfn, None)
|
sslctx.load_cert_chain(certfn, None)
|
||||||
self.httpd.socket = sslctx.wrap_socket(self.httpd.socket, server_side=True)
|
self.https_httpd.socket = sslctx.wrap_socket(self.https_httpd.socket, server_side=True)
|
||||||
self.port = http_server_port(self.httpd)
|
self.https_port = http_server_port(self.https_httpd)
|
||||||
self.server_thread = threading.Thread(target=self.httpd.serve_forever)
|
self.https_server_thread = threading.Thread(target=self.https_httpd.serve_forever)
|
||||||
self.server_thread.daemon = True
|
self.https_server_thread.daemon = True
|
||||||
self.server_thread.start()
|
self.https_server_thread.start()
|
||||||
|
|
||||||
def test_nocheckcertificate(self):
|
def test_nocheckcertificate(self):
|
||||||
ydl = YoutubeDL({'logger': FakeLogger()})
|
with FakeYDL({'logger': FakeLogger()}) as ydl:
|
||||||
self.assertRaises(
|
with self.assertRaises(urllib.error.URLError):
|
||||||
Exception,
|
ydl.urlopen(sanitized_Request(f'https://127.0.0.1:{self.https_port}/headers'))
|
||||||
ydl.extract_info, 'https://127.0.0.1:%d/video.html' % self.port)
|
|
||||||
|
|
||||||
ydl = YoutubeDL({'logger': FakeLogger(), 'nocheckcertificate': True})
|
with FakeYDL({'logger': FakeLogger(), 'nocheckcertificate': True}) as ydl:
|
||||||
r = ydl.extract_info('https://127.0.0.1:%d/video.html' % self.port)
|
r = ydl.urlopen(sanitized_Request(f'https://127.0.0.1:{self.https_port}/headers'))
|
||||||
self.assertEqual(r['url'], 'https://127.0.0.1:%d/vid.mp4' % self.port)
|
self.assertEqual(r.status, 200)
|
||||||
|
r.close()
|
||||||
|
|
||||||
|
def test_percent_encode(self):
|
||||||
|
with FakeYDL() as ydl:
|
||||||
|
# Unicode characters should be encoded with uppercase percent-encoding
|
||||||
|
res = ydl.urlopen(sanitized_Request(f'http://127.0.0.1:{self.http_port}/中文.html'))
|
||||||
|
self.assertEqual(res.status, 200)
|
||||||
|
res.close()
|
||||||
|
# don't normalize existing percent encodings
|
||||||
|
res = ydl.urlopen(sanitized_Request(f'http://127.0.0.1:{self.http_port}/%c7%9f'))
|
||||||
|
self.assertEqual(res.status, 200)
|
||||||
|
res.close()
|
||||||
|
|
||||||
|
def test_unicode_path_redirection(self):
|
||||||
|
with FakeYDL() as ydl:
|
||||||
|
r = ydl.urlopen(sanitized_Request(f'http://127.0.0.1:{self.http_port}/302-non-ascii-redirect'))
|
||||||
|
self.assertEqual(r.url, f'http://127.0.0.1:{self.http_port}/%E4%B8%AD%E6%96%87.html')
|
||||||
|
r.close()
|
||||||
|
|
||||||
|
def test_redirect(self):
|
||||||
|
with FakeYDL() as ydl:
|
||||||
|
def do_req(redirect_status, method):
|
||||||
|
data = b'testdata' if method in ('POST', 'PUT') else None
|
||||||
|
res = ydl.urlopen(sanitized_Request(
|
||||||
|
f'http://127.0.0.1:{self.http_port}/redirect_{redirect_status}', method=method, data=data))
|
||||||
|
return res.read().decode('utf-8'), res.headers.get('method', '')
|
||||||
|
|
||||||
|
# A 303 must either use GET or HEAD for subsequent request
|
||||||
|
self.assertEqual(do_req(303, 'POST'), ('', 'GET'))
|
||||||
|
self.assertEqual(do_req(303, 'HEAD'), ('', 'HEAD'))
|
||||||
|
|
||||||
|
self.assertEqual(do_req(303, 'PUT'), ('', 'GET'))
|
||||||
|
|
||||||
|
# 301 and 302 turn POST only into a GET
|
||||||
|
self.assertEqual(do_req(301, 'POST'), ('', 'GET'))
|
||||||
|
self.assertEqual(do_req(301, 'HEAD'), ('', 'HEAD'))
|
||||||
|
self.assertEqual(do_req(302, 'POST'), ('', 'GET'))
|
||||||
|
self.assertEqual(do_req(302, 'HEAD'), ('', 'HEAD'))
|
||||||
|
|
||||||
|
self.assertEqual(do_req(301, 'PUT'), ('testdata', 'PUT'))
|
||||||
|
self.assertEqual(do_req(302, 'PUT'), ('testdata', 'PUT'))
|
||||||
|
|
||||||
|
# 307 and 308 should not change method
|
||||||
|
for m in ('POST', 'PUT'):
|
||||||
|
self.assertEqual(do_req(307, m), ('testdata', m))
|
||||||
|
self.assertEqual(do_req(308, m), ('testdata', m))
|
||||||
|
|
||||||
|
self.assertEqual(do_req(307, 'HEAD'), ('', 'HEAD'))
|
||||||
|
self.assertEqual(do_req(308, 'HEAD'), ('', 'HEAD'))
|
||||||
|
|
||||||
|
# These should not redirect and instead raise an HTTPError
|
||||||
|
for code in (300, 304, 305, 306):
|
||||||
|
with self.assertRaises(urllib.error.HTTPError):
|
||||||
|
do_req(code, 'GET')
|
||||||
|
|
||||||
+    def test_content_type(self):
+        # https://github.com/yt-dlp/yt-dlp/commit/379a4f161d4ad3e40932dcf5aca6e6fb9715ab28
+        with FakeYDL({'nocheckcertificate': True}) as ydl:
+            # method should be auto-detected as POST
+            r = sanitized_Request(f'https://localhost:{self.https_port}/headers', data=urlencode_postdata({'test': 'test'}))
+
+            headers = ydl.urlopen(r).read().decode('utf-8')
+            self.assertIn('Content-Type: application/x-www-form-urlencoded', headers)
+
+            # test http
+            r = sanitized_Request(f'http://localhost:{self.http_port}/headers', data=urlencode_postdata({'test': 'test'}))
+            headers = ydl.urlopen(r).read().decode('utf-8')
+            self.assertIn('Content-Type: application/x-www-form-urlencoded', headers)
+
+    def test_cookiejar(self):
+        with FakeYDL() as ydl:
+            ydl.cookiejar.set_cookie(http.cookiejar.Cookie(
+                0, 'test', 'ytdlp', None, False, '127.0.0.1', True,
+                False, '/headers', True, False, None, False, None, None, {}))
+            data = ydl.urlopen(sanitized_Request(f'http://127.0.0.1:{self.http_port}/headers')).read()
+            self.assertIn(b'Cookie: test=ytdlp', data)
+
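(Aside: `http.cookiejar.Cookie` takes only positional arguments, which makes the call above hard to read; for reference, the fields being set are, in order:)

    # http.cookiejar.Cookie(version, name, value, port, port_specified,
    #                       domain, domain_specified, domain_initial_dot,
    #                       path, path_specified, secure, expires, discard,
    #                       comment, comment_url, rest)
    # i.e. name='test', value='ytdlp', domain='127.0.0.1', path='/headers',
    # with no port, not secure, no expiry, and an empty `rest` dict.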
+    def test_no_compression_compat_header(self):
+        with FakeYDL() as ydl:
+            data = ydl.urlopen(
+                sanitized_Request(
+                    f'http://127.0.0.1:{self.http_port}/headers',
+                    headers={'Youtubedl-no-compression': True})).read()
+            self.assertIn(b'Accept-Encoding: identity', data)
+            self.assertNotIn(b'youtubedl-no-compression', data.lower())
+
+    def test_gzip_trailing_garbage(self):
+        # https://github.com/ytdl-org/youtube-dl/commit/aa3e950764337ef9800c936f4de89b31c00dfcf5
+        # https://github.com/ytdl-org/youtube-dl/commit/6f2ec15cee79d35dba065677cad9da7491ec6e6f
+        with FakeYDL() as ydl:
+            data = ydl.urlopen(sanitized_Request(f'http://localhost:{self.http_port}/trailing_garbage')).read().decode('utf-8')
+            self.assertEqual(data, '<html><video src="/vid.mp4" /></html>')
+
+    @unittest.skipUnless(brotli, 'brotli support is not installed')
+    def test_brotli(self):
+        with FakeYDL() as ydl:
+            res = ydl.urlopen(
+                sanitized_Request(
+                    f'http://127.0.0.1:{self.http_port}/content-encoding',
+                    headers={'ytdl-encoding': 'br'}))
+            self.assertEqual(res.headers.get('Content-Encoding'), 'br')
+            self.assertEqual(res.read(), b'<html><video src="/vid.mp4" /></html>')
+
+    def test_deflate(self):
+        with FakeYDL() as ydl:
+            res = ydl.urlopen(
+                sanitized_Request(
+                    f'http://127.0.0.1:{self.http_port}/content-encoding',
+                    headers={'ytdl-encoding': 'deflate'}))
+            self.assertEqual(res.headers.get('Content-Encoding'), 'deflate')
+            self.assertEqual(res.read(), b'<html><video src="/vid.mp4" /></html>')
+
+    def test_gzip(self):
+        with FakeYDL() as ydl:
+            res = ydl.urlopen(
+                sanitized_Request(
+                    f'http://127.0.0.1:{self.http_port}/content-encoding',
+                    headers={'ytdl-encoding': 'gzip'}))
+            self.assertEqual(res.headers.get('Content-Encoding'), 'gzip')
+            self.assertEqual(res.read(), b'<html><video src="/vid.mp4" /></html>')
+
+    def test_multiple_encodings(self):
+        # https://www.rfc-editor.org/rfc/rfc9110.html#section-8.4
+        with FakeYDL() as ydl:
+            for pair in ('gzip,deflate', 'deflate, gzip', 'gzip, gzip', 'deflate, deflate'):
+                res = ydl.urlopen(
+                    sanitized_Request(
+                        f'http://127.0.0.1:{self.http_port}/content-encoding',
+                        headers={'ytdl-encoding': pair}))
+                self.assertEqual(res.headers.get('Content-Encoding'), pair)
+                self.assertEqual(res.read(), b'<html><video src="/vid.mp4" /></html>')
+
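(Aside: RFC 9110 §8.4 says stacked content-codings are applied in the order listed, so a client must undo them in reverse; a minimal standard-library sketch of that rule, handling only the gzip/deflate pairs the test exercises:)

    import gzip, zlib

    def decode_stacked(data, content_encoding):
        # undo the listed codings right-to-left
        for coding in reversed([c.strip() for c in content_encoding.split(',')]):
            data = gzip.decompress(data) if coding == 'gzip' else zlib.decompress(data)
        return data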
+    def test_unsupported_encoding(self):
+        # it should return the raw content
+        with FakeYDL() as ydl:
+            res = ydl.urlopen(
+                sanitized_Request(
+                    f'http://127.0.0.1:{self.http_port}/content-encoding',
+                    headers={'ytdl-encoding': 'unsupported'}))
+            self.assertEqual(res.headers.get('Content-Encoding'), 'unsupported')
+            self.assertEqual(res.read(), b'raw')
+
+
 class TestClientCert(unittest.TestCase):
@ -112,8 +403,8 @@ def _run_test(self, **params):
             'nocheckcertificate': True,
             **params,
         })
-        r = ydl.extract_info('https://127.0.0.1:%d/video.html' % self.port)
-        self.assertEqual(r['url'], 'https://127.0.0.1:%d/vid.mp4' % self.port)
+        r = ydl.extract_info(f'https://127.0.0.1:{self.port}/video.html')
+        self.assertEqual(r['url'], f'https://127.0.0.1:{self.port}/vid.mp4')
 
     def test_certificate_combined_nopass(self):
         self._run_test(client_certificate=os.path.join(self.certdir, 'clientwithkey.crt'))
@ -188,5 +479,22 @@ def test_proxy_with_idn(self):
         self.assertEqual(response, 'normal: http://xn--fiq228c.tw/')
+
+
+class TestFileURL(unittest.TestCase):
+    # See https://github.com/ytdl-org/youtube-dl/issues/8227
+    def test_file_urls(self):
+        tf = tempfile.NamedTemporaryFile(delete=False)
+        tf.write(b'foobar')
+        tf.close()
+        url = pathlib.Path(tf.name).as_uri()
+        with FakeYDL() as ydl:
+            self.assertRaisesRegex(
+                urllib.error.URLError, 'file:// URLs are explicitly disabled in yt-dlp for security reasons', ydl.urlopen, url)
+        with FakeYDL({'enable_file_urls': True}) as ydl:
+            res = ydl.urlopen(url)
+            self.assertEqual(res.read(), b'foobar')
+            res.close()
+        os.unlink(tf.name)
+
+
 if __name__ == '__main__':
     unittest.main()
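(Aside: the `enable_file_urls` option used above appears to be the programmatic twin of yt-dlp's `--enable-file-urls` switch; the URI itself comes straight from the standard library, which also handles quoting:)

    import pathlib
    print(pathlib.Path('/tmp/x y').as_uri())  # file:///tmp/x%20y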
@ -8,410 +8,372 @@
 sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
 
 import math
-import re
 
 from yt_dlp.jsinterp import JS_Undefined, JSInterpreter
 
 
+class NaN:
+    pass
+
+
 class TestJSInterpreter(unittest.TestCase):
+    def _test(self, jsi_or_code, expected, func='f', args=()):
+        if isinstance(jsi_or_code, str):
+            jsi_or_code = JSInterpreter(jsi_or_code)
+        got = jsi_or_code.call_function(func, *args)
+        if expected is NaN:
+            self.assertTrue(math.isnan(got), f'{got} is not NaN')
+        else:
+            self.assertEqual(got, expected)
+
     def test_basic(self):
-        jsi = JSInterpreter('function x(){;}')
-        self.assertEqual(jsi.call_function('x'), None)
+        jsi = JSInterpreter('function f(){;}')
+        self.assertEqual(repr(jsi.extract_function('f')), 'F<f>')
+        self._test(jsi, None)
 
-        jsi = JSInterpreter('function x3(){return 42;}')
-        self.assertEqual(jsi.call_function('x3'), 42)
-
-        jsi = JSInterpreter('function x3(){42}')
-        self.assertEqual(jsi.call_function('x3'), None)
-
-        jsi = JSInterpreter('var x5 = function(){return 42;}')
-        self.assertEqual(jsi.call_function('x5'), 42)
+        self._test('function f(){return 42;}', 42)
+        self._test('function f(){42}', None)
+        self._test('var f = function(){return 42;}', 42)
+
+    def test_add(self):
+        self._test('function f(){return 42 + 7;}', 49)
+        self._test('function f(){return 42 + undefined;}', NaN)
+        self._test('function f(){return 42 + null;}', 42)
+
+    def test_sub(self):
+        self._test('function f(){return 42 - 7;}', 35)
+        self._test('function f(){return 42 - undefined;}', NaN)
+        self._test('function f(){return 42 - null;}', 42)
+
+    def test_mul(self):
+        self._test('function f(){return 42 * 7;}', 294)
+        self._test('function f(){return 42 * undefined;}', NaN)
+        self._test('function f(){return 42 * null;}', 0)
+
+    def test_div(self):
+        jsi = JSInterpreter('function f(a, b){return a / b;}')
+        self._test(jsi, NaN, args=(0, 0))
+        self._test(jsi, NaN, args=(JS_Undefined, 1))
+        self._test(jsi, float('inf'), args=(2, 0))
+        self._test(jsi, 0, args=(0, 3))
+
+    def test_mod(self):
+        self._test('function f(){return 42 % 7;}', 0)
+        self._test('function f(){return 42 % 0;}', NaN)
+        self._test('function f(){return 42 % undefined;}', NaN)
+
+    def test_exp(self):
+        self._test('function f(){return 42 ** 2;}', 1764)
+        self._test('function f(){return 42 ** undefined;}', NaN)
+        self._test('function f(){return 42 ** null;}', 1)
+        self._test('function f(){return undefined ** 42;}', NaN)
 
     def test_calc(self):
-        jsi = JSInterpreter('function x4(a){return 2*a+1;}')
-        self.assertEqual(jsi.call_function('x4', 3), 7)
+        self._test('function f(a){return 2*a+1;}', 7, args=[3])
 
     def test_empty_return(self):
-        jsi = JSInterpreter('function f(){return; y()}')
-        self.assertEqual(jsi.call_function('f'), None)
+        self._test('function f(){return; y()}', None)
 
     def test_morespace(self):
-        jsi = JSInterpreter('function x (a) { return 2 * a + 1 ; }')
-        self.assertEqual(jsi.call_function('x', 3), 7)
-
-        jsi = JSInterpreter('function f () { x = 2 ; return x; }')
-        self.assertEqual(jsi.call_function('f'), 2)
+        self._test('function f (a) { return 2 * a + 1 ; }', 7, args=[3])
+        self._test('function f () { x = 2 ; return x; }', 2)
 
     def test_strange_chars(self):
-        jsi = JSInterpreter('function $_xY1 ($_axY1) { var $_axY2 = $_axY1 + 1; return $_axY2; }')
-        self.assertEqual(jsi.call_function('$_xY1', 20), 21)
+        self._test('function $_xY1 ($_axY1) { var $_axY2 = $_axY1 + 1; return $_axY2; }',
+                   21, args=[20], func='$_xY1')
 
     def test_operators(self):
-        jsi = JSInterpreter('function f(){return 1 << 5;}')
-        self.assertEqual(jsi.call_function('f'), 32)
-
-        jsi = JSInterpreter('function f(){return 2 ** 5}')
-        self.assertEqual(jsi.call_function('f'), 32)
-
-        jsi = JSInterpreter('function f(){return 19 & 21;}')
-        self.assertEqual(jsi.call_function('f'), 17)
-
-        jsi = JSInterpreter('function f(){return 11 >> 2;}')
-        self.assertEqual(jsi.call_function('f'), 2)
-
-        jsi = JSInterpreter('function f(){return []? 2+3: 4;}')
-        self.assertEqual(jsi.call_function('f'), 5)
-
-        jsi = JSInterpreter('function f(){return 1 == 2}')
-        self.assertEqual(jsi.call_function('f'), False)
-
-        jsi = JSInterpreter('function f(){return 0 && 1 || 2;}')
-        self.assertEqual(jsi.call_function('f'), 2)
-
-        jsi = JSInterpreter('function f(){return 0 ?? 42;}')
-        self.assertEqual(jsi.call_function('f'), 0)
-
-        jsi = JSInterpreter('function f(){return "life, the universe and everything" < 42;}')
-        self.assertFalse(jsi.call_function('f'))
+        self._test('function f(){return 1 << 5;}', 32)
+        self._test('function f(){return 2 ** 5}', 32)
+        self._test('function f(){return 19 & 21;}', 17)
+        self._test('function f(){return 11 >> 2;}', 2)
+        self._test('function f(){return []? 2+3: 4;}', 5)
+        self._test('function f(){return 1 == 2}', False)
+        self._test('function f(){return 0 && 1 || 2;}', 2)
+        self._test('function f(){return 0 ?? 42;}', 0)
+        self._test('function f(){return "life, the universe and everything" < 42;}', False)
 
     def test_array_access(self):
-        jsi = JSInterpreter('function f(){var x = [1,2,3]; x[0] = 4; x[0] = 5; x[2.0] = 7; return x;}')
-        self.assertEqual(jsi.call_function('f'), [5, 2, 7])
+        self._test('function f(){var x = [1,2,3]; x[0] = 4; x[0] = 5; x[2.0] = 7; return x;}', [5, 2, 7])
 
     def test_parens(self):
-        jsi = JSInterpreter('function f(){return (1) + (2) * ((( (( (((((3)))))) )) ));}')
-        self.assertEqual(jsi.call_function('f'), 7)
-
-        jsi = JSInterpreter('function f(){return (1 + 2) * 3;}')
-        self.assertEqual(jsi.call_function('f'), 9)
+        self._test('function f(){return (1) + (2) * ((( (( (((((3)))))) )) ));}', 7)
+        self._test('function f(){return (1 + 2) * 3;}', 9)
 
     def test_quotes(self):
-        jsi = JSInterpreter(R'function f(){return "a\"\\("}')
-        self.assertEqual(jsi.call_function('f'), R'a"\(')
+        self._test(R'function f(){return "a\"\\("}', R'a"\(')
 
     def test_assignments(self):
-        jsi = JSInterpreter('function f(){var x = 20; x = 30 + 1; return x;}')
-        self.assertEqual(jsi.call_function('f'), 31)
-
-        jsi = JSInterpreter('function f(){var x = 20; x += 30 + 1; return x;}')
-        self.assertEqual(jsi.call_function('f'), 51)
-
-        jsi = JSInterpreter('function f(){var x = 20; x -= 30 + 1; return x;}')
-        self.assertEqual(jsi.call_function('f'), -11)
+        self._test('function f(){var x = 20; x = 30 + 1; return x;}', 31)
+        self._test('function f(){var x = 20; x += 30 + 1; return x;}', 51)
+        self._test('function f(){var x = 20; x -= 30 + 1; return x;}', -11)
 
+    @unittest.skip('Not implemented')
     def test_comments(self):
-        'Skipping: Not yet fully implemented'
-        return
-        jsi = JSInterpreter('''
-            function x() {
-                var x = /* 1 + */ 2;
-                var y = /* 30
-                * 40 */ 50;
-                return x + y;
-            }
-        ''')
-        self.assertEqual(jsi.call_function('x'), 52)
+        self._test('''
+            function f() {
+                var x = /* 1 + */ 2;
+                var y = /* 30
+                * 40 */ 50;
+                return x + y;
+            }
+        ''', 52)
 
-        jsi = JSInterpreter('''
+        self._test('''
             function f() {
                 var x = "/*";
                 var y = 1 /* comment */ + 2;
                 return y;
             }
-        ''')
-        self.assertEqual(jsi.call_function('f'), 3)
+        ''', 3)
 
     def test_precedence(self):
-        jsi = JSInterpreter('''
-            function x() {
+        self._test('''
+            function f() {
                 var a = [10, 20, 30, 40, 50];
                 var b = 6;
                 a[0]=a[b%a.length];
                 return a;
-            }''')
-        self.assertEqual(jsi.call_function('x'), [20, 20, 30, 40, 50])
+            }
+        ''', [20, 20, 30, 40, 50])
 
     def test_builtins(self):
-        jsi = JSInterpreter('''
-            function x() { return NaN }
-        ''')
-        self.assertTrue(math.isnan(jsi.call_function('x')))
+        self._test('function f() { return NaN }', NaN)
 
-        jsi = JSInterpreter('''
-            function x() { return new Date('Wednesday 31 December 1969 18:01:26 MDT') - 0; }
-        ''')
-        self.assertEqual(jsi.call_function('x'), 86000)
-        jsi = JSInterpreter('''
-            function x(dt) { return new Date(dt) - 0; }
-        ''')
-        self.assertEqual(jsi.call_function('x', 'Wednesday 31 December 1969 18:01:26 MDT'), 86000)
+    def test_date(self):
+        self._test('function f() { return new Date("Wednesday 31 December 1969 18:01:26 MDT") - 0; }', 86000)
+
+        jsi = JSInterpreter('function f(dt) { return new Date(dt) - 0; }')
+        self._test(jsi, 86000, args=['Wednesday 31 December 1969 18:01:26 MDT'])
+        self._test(jsi, 86000, args=['12/31/1969 18:01:26 MDT'])  # m/d/y
+        self._test(jsi, 0, args=['1 January 1970 00:00:00 UTC'])
 
     def test_call(self):
         jsi = JSInterpreter('''
             function x() { return 2; }
             function y(a) { return x() + (a?a:0); }
             function z() { return y(3); }
         ''')
-        self.assertEqual(jsi.call_function('z'), 5)
-        self.assertEqual(jsi.call_function('y'), 2)
+        self._test(jsi, 5, func='z')
+        self._test(jsi, 2, func='y')
+
+    def test_if(self):
+        self._test('''
+            function f() {
+                let a = 9;
+                if (0==0) {a++}
+                return a
+            }
+        ''', 10)
+
+        self._test('''
+            function f() {
+                if (0==0) {return 10}
+            }
+        ''', 10)
+
+        self._test('''
+            function f() {
+                if (0!=0) {return 1}
+                else {return 10}
+            }
+        ''', 10)
+
+        """  # Unsupported
+        self._test('''
+            function f() {
+                if (0!=0) {return 1}
+                else if (1==0) {return 2}
+                else {return 10}
+            }
+        ''', 10)
+        """
+
     def test_for_loop(self):
-        jsi = JSInterpreter('''
-            function x() { a=0; for (i=0; i-10; i++) {a++} return a }
-        ''')
-        self.assertEqual(jsi.call_function('x'), 10)
+        self._test('function f() { a=0; for (i=0; i-10; i++) {a++} return a }', 10)
 
     def test_switch(self):
         jsi = JSInterpreter('''
-            function x(f) { switch(f){
-            case 1:f+=1;
-            case 2:f+=2;
-            case 3:f+=3;break;
-            case 4:f+=4;
-            default:f=0;
-            } return f }
+            function f(x) { switch(x){
+            case 1:x+=1;
+            case 2:x+=2;
+            case 3:x+=3;break;
+            case 4:x+=4;
+            default:x=0;
+            } return x }
         ''')
-        self.assertEqual(jsi.call_function('x', 1), 7)
-        self.assertEqual(jsi.call_function('x', 3), 6)
-        self.assertEqual(jsi.call_function('x', 5), 0)
+        self._test(jsi, 7, args=[1])
+        self._test(jsi, 6, args=[3])
+        self._test(jsi, 0, args=[5])
 
     def test_switch_default(self):
         jsi = JSInterpreter('''
-            function x(f) { switch(f){
-            case 2: f+=2;
-            default: f-=1;
+            function f(x) { switch(x){
+            case 2: x+=2;
+            default: x-=1;
             case 5:
-            case 6: f+=6;
+            case 6: x+=6;
             case 0: break;
-            case 1: f+=1;
-            } return f }
+            case 1: x+=1;
+            } return x }
         ''')
-        self.assertEqual(jsi.call_function('x', 1), 2)
-        self.assertEqual(jsi.call_function('x', 5), 11)
-        self.assertEqual(jsi.call_function('x', 9), 14)
+        self._test(jsi, 2, args=[1])
+        self._test(jsi, 11, args=[5])
+        self._test(jsi, 14, args=[9])
 
     def test_try(self):
-        jsi = JSInterpreter('''
-            function x() { try{return 10} catch(e){return 5} }
-        ''')
-        self.assertEqual(jsi.call_function('x'), 10)
+        self._test('function f() { try{return 10} catch(e){return 5} }', 10)
 
     def test_catch(self):
-        jsi = JSInterpreter('''
-            function x() { try{throw 10} catch(e){return 5} }
-        ''')
-        self.assertEqual(jsi.call_function('x'), 5)
+        self._test('function f() { try{throw 10} catch(e){return 5} }', 5)
 
     def test_finally(self):
-        jsi = JSInterpreter('''
-            function x() { try{throw 10} finally {return 42} }
-        ''')
-        self.assertEqual(jsi.call_function('x'), 42)
-
-        jsi = JSInterpreter('''
-            function x() { try{throw 10} catch(e){return 5} finally {return 42} }
-        ''')
-        self.assertEqual(jsi.call_function('x'), 42)
+        self._test('function f() { try{throw 10} finally {return 42} }', 42)
+        self._test('function f() { try{throw 10} catch(e){return 5} finally {return 42} }', 42)
 
     def test_nested_try(self):
-        jsi = JSInterpreter('''
-            function x() {try {
-            try{throw 10} finally {throw 42}
-            } catch(e){return 5} }
-        ''')
-        self.assertEqual(jsi.call_function('x'), 5)
+        self._test('''
+            function f() {try {
+            try{throw 10} finally {throw 42}
+            } catch(e){return 5} }
+        ''', 5)
 
     def test_for_loop_continue(self):
-        jsi = JSInterpreter('''
-            function x() { a=0; for (i=0; i-10; i++) { continue; a++ } return a }
-        ''')
-        self.assertEqual(jsi.call_function('x'), 0)
+        self._test('function f() { a=0; for (i=0; i-10; i++) { continue; a++ } return a }', 0)
 
     def test_for_loop_break(self):
-        jsi = JSInterpreter('''
-            function x() { a=0; for (i=0; i-10; i++) { break; a++ } return a }
-        ''')
-        self.assertEqual(jsi.call_function('x'), 0)
+        self._test('function f() { a=0; for (i=0; i-10; i++) { break; a++ } return a }', 0)
 
     def test_for_loop_try(self):
-        jsi = JSInterpreter('''
-            function x() {
-                for (i=0; i-10; i++) { try { if (i == 5) throw i} catch {return 10} finally {break} };
-                return 42 }
-        ''')
-        self.assertEqual(jsi.call_function('x'), 42)
+        self._test('''
+            function f() {
+                for (i=0; i-10; i++) { try { if (i == 5) throw i} catch {return 10} finally {break} };
+                return 42 }
+        ''', 42)
 
     def test_literal_list(self):
-        jsi = JSInterpreter('''
-            function x() { return [1, 2, "asdf", [5, 6, 7]][3] }
-        ''')
-        self.assertEqual(jsi.call_function('x'), [5, 6, 7])
+        self._test('function f() { return [1, 2, "asdf", [5, 6, 7]][3] }', [5, 6, 7])
 
     def test_comma(self):
-        jsi = JSInterpreter('''
-            function x() { a=5; a -= 1, a+=3; return a }
-        ''')
-        self.assertEqual(jsi.call_function('x'), 7)
-
-        jsi = JSInterpreter('''
-            function x() { a=5; return (a -= 1, a+=3, a); }
-        ''')
-        self.assertEqual(jsi.call_function('x'), 7)
-
-        jsi = JSInterpreter('''
-            function x() { return (l=[0,1,2,3], function(a, b){return a+b})((l[1], l[2]), l[3]) }
-        ''')
-        self.assertEqual(jsi.call_function('x'), 5)
+        self._test('function f() { a=5; a -= 1, a+=3; return a }', 7)
+        self._test('function f() { a=5; return (a -= 1, a+=3, a); }', 7)
+        self._test('function f() { return (l=[0,1,2,3], function(a, b){return a+b})((l[1], l[2]), l[3]) }', 5)
 
|
def test_void(self):
|
||||||
jsi = JSInterpreter('''
|
self._test('function f() { return void 42; }', None)
|
||||||
function x() { return void 42; }
|
|
||||||
''')
|
|
||||||
self.assertEqual(jsi.call_function('x'), None)
|
|
||||||
|
|
||||||
def test_return_function(self):
|
def test_return_function(self):
|
||||||
jsi = JSInterpreter('''
|
jsi = JSInterpreter('''
|
||||||
function x() { return [1, function(){return 1}][1] }
|
function f() { return [1, function(){return 1}][1] }
|
||||||
''')
|
''')
|
||||||
self.assertEqual(jsi.call_function('x')([]), 1)
|
self.assertEqual(jsi.call_function('f')([]), 1)
|
||||||
|
|
||||||
def test_null(self):
|
def test_null(self):
|
||||||
jsi = JSInterpreter('''
|
self._test('function f() { return null; }', None)
|
||||||
function x() { return null; }
|
self._test('function f() { return [null > 0, null < 0, null == 0, null === 0]; }',
|
||||||
''')
|
[False, False, False, False])
|
||||||
self.assertEqual(jsi.call_function('x'), None)
|
self._test('function f() { return [null >= 0, null <= 0]; }', [True, True])
|
||||||
|
|
||||||
jsi = JSInterpreter('''
|
|
||||||
function x() { return [null > 0, null < 0, null == 0, null === 0]; }
|
|
||||||
''')
|
|
||||||
self.assertEqual(jsi.call_function('x'), [False, False, False, False])
|
|
||||||
|
|
||||||
jsi = JSInterpreter('''
|
|
||||||
function x() { return [null >= 0, null <= 0]; }
|
|
||||||
''')
|
|
||||||
self.assertEqual(jsi.call_function('x'), [True, True])
|
|
||||||
|
|
||||||
def test_undefined(self):
|
def test_undefined(self):
|
||||||
jsi = JSInterpreter('''
|
self._test('function f() { return undefined === undefined; }', True)
|
||||||
function x() { return undefined === undefined; }
|
self._test('function f() { return undefined; }', JS_Undefined)
|
||||||
''')
|
self._test('function f() {return undefined ?? 42; }', 42)
|
||||||
self.assertEqual(jsi.call_function('x'), True)
|
self._test('function f() { let v; return v; }', JS_Undefined)
|
||||||
|
self._test('function f() { let v; return v**0; }', 1)
|
||||||
|
self._test('function f() { let v; return [v>42, v<=42, v&&42, 42&&v]; }',
|
||||||
|
[False, False, JS_Undefined, JS_Undefined])
|
||||||
|
|
||||||
|
self._test('''
|
||||||
|
function f() { return [
|
||||||
|
undefined === undefined,
|
||||||
|
undefined == undefined,
|
||||||
|
undefined == null,
|
||||||
|
undefined < undefined,
|
||||||
|
undefined > undefined,
|
||||||
|
undefined === 0,
|
||||||
|
undefined == 0,
|
||||||
|
undefined < 0,
|
||||||
|
undefined > 0,
|
||||||
|
undefined >= 0,
|
||||||
|
undefined <= 0,
|
||||||
|
undefined > null,
|
||||||
|
undefined < null,
|
||||||
|
undefined === null
|
||||||
|
]; }
|
||||||
|
''', list(map(bool, (1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0))))
|
||||||
|
|
||||||
jsi = JSInterpreter('''
|
jsi = JSInterpreter('''
|
||||||
function x() { return undefined; }
|
function f() { let v; return [42+v, v+42, v**42, 42**v, 0**v]; }
|
||||||
''')
|
''')
|
||||||
self.assertEqual(jsi.call_function('x'), JS_Undefined)
|
for y in jsi.call_function('f'):
|
||||||
|
|
||||||
jsi = JSInterpreter('''
|
|
||||||
function x() { let v; return v; }
|
|
||||||
''')
|
|
||||||
self.assertEqual(jsi.call_function('x'), JS_Undefined)
|
|
||||||
|
|
||||||
jsi = JSInterpreter('''
|
|
||||||
function x() { return [undefined === undefined, undefined == undefined, undefined < undefined, undefined > undefined]; }
|
|
||||||
''')
|
|
||||||
self.assertEqual(jsi.call_function('x'), [True, True, False, False])
|
|
||||||
|
|
||||||
jsi = JSInterpreter('''
|
|
||||||
function x() { return [undefined === 0, undefined == 0, undefined < 0, undefined > 0]; }
|
|
||||||
''')
|
|
||||||
self.assertEqual(jsi.call_function('x'), [False, False, False, False])
|
|
||||||
|
|
||||||
jsi = JSInterpreter('''
|
|
||||||
function x() { return [undefined >= 0, undefined <= 0]; }
|
|
||||||
''')
|
|
||||||
self.assertEqual(jsi.call_function('x'), [False, False])
|
|
||||||
|
|
||||||
jsi = JSInterpreter('''
|
|
||||||
function x() { return [undefined > null, undefined < null, undefined == null, undefined === null]; }
|
|
||||||
''')
|
|
||||||
self.assertEqual(jsi.call_function('x'), [False, False, True, False])
|
|
||||||
|
|
||||||
jsi = JSInterpreter('''
|
|
||||||
function x() { return [undefined === null, undefined == null, undefined < null, undefined > null]; }
|
|
||||||
''')
|
|
||||||
self.assertEqual(jsi.call_function('x'), [False, True, False, False])
|
|
||||||
|
|
||||||
jsi = JSInterpreter('''
|
|
||||||
function x() { let v; return [42+v, v+42, v**42, 42**v, 0**v]; }
|
|
||||||
''')
|
|
||||||
for y in jsi.call_function('x'):
|
|
||||||
self.assertTrue(math.isnan(y))
|
self.assertTrue(math.isnan(y))
|
||||||
|
|
||||||
jsi = JSInterpreter('''
|
|
||||||
function x() { let v; return v**0; }
|
|
||||||
''')
|
|
||||||
self.assertEqual(jsi.call_function('x'), 1)
|
|
||||||
|
|
||||||
jsi = JSInterpreter('''
|
|
||||||
function x() { let v; return [v>42, v<=42, v&&42, 42&&v]; }
|
|
||||||
''')
|
|
||||||
self.assertEqual(jsi.call_function('x'), [False, False, JS_Undefined, JS_Undefined])
|
|
||||||
|
|
||||||
jsi = JSInterpreter('function x(){return undefined ?? 42; }')
|
|
||||||
self.assertEqual(jsi.call_function('x'), 42)
|
|
||||||
|
|
||||||
     def test_object(self):
-        jsi = JSInterpreter('''
-            function x() { return {}; }
-        ''')
-        self.assertEqual(jsi.call_function('x'), {})
-
-        jsi = JSInterpreter('''
-            function x() { let a = {m1: 42, m2: 0 }; return [a["m1"], a.m2]; }
-        ''')
-        self.assertEqual(jsi.call_function('x'), [42, 0])
-
-        jsi = JSInterpreter('''
-            function x() { let a; return a?.qq; }
-        ''')
-        self.assertEqual(jsi.call_function('x'), JS_Undefined)
-
-        jsi = JSInterpreter('''
-            function x() { let a = {m1: 42, m2: 0 }; return a?.qq; }
-        ''')
-        self.assertEqual(jsi.call_function('x'), JS_Undefined)
+        self._test('function f() { return {}; }', {})
+        self._test('function f() { let a = {m1: 42, m2: 0 }; return [a["m1"], a.m2]; }', [42, 0])
+        self._test('function f() { let a; return a?.qq; }', JS_Undefined)
+        self._test('function f() { let a = {m1: 42, m2: 0 }; return a?.qq; }', JS_Undefined)
 
     def test_regex(self):
-        jsi = JSInterpreter('''
-            function x() { let a=/,,[/,913,/](,)}/; }
-        ''')
-        self.assertEqual(jsi.call_function('x'), None)
-
-        jsi = JSInterpreter('''
-            function x() { let a=/,,[/,913,/](,)}/; return a; }
-        ''')
-        self.assertIsInstance(jsi.call_function('x'), re.Pattern)
-
-        jsi = JSInterpreter('''
-            function x() { let a=/,,[/,913,/](,)}/i; return a; }
-        ''')
-        self.assertEqual(jsi.call_function('x').flags & re.I, re.I)
-
-        jsi = JSInterpreter(R'''
-            function x() { let a=/,][}",],()}(\[)/; return a; }
-        ''')
-        self.assertEqual(jsi.call_function('x').pattern, r',][}",],()}(\[)')
-
-        jsi = JSInterpreter(R'''
-            function x() { let a=[/[)\\]/]; return a[0]; }
-        ''')
-        self.assertEqual(jsi.call_function('x').pattern, r'[)\\]')
+        self._test('function f() { let a=/,,[/,913,/](,)}/; }', None)
+        self._test('function f() { let a=/,,[/,913,/](,)}/; return a; }', R'/,,[/,913,/](,)}/0')
+
+        R'''  # We are not compiling regex
+        jsi = JSInterpreter('function f() { let a=/,,[/,913,/](,)}/; return a; }')
+        self.assertIsInstance(jsi.call_function('f'), re.Pattern)
+
+        jsi = JSInterpreter('function f() { let a=/,,[/,913,/](,)}/i; return a; }')
+        self.assertEqual(jsi.call_function('f').flags & re.I, re.I)
+
+        jsi = JSInterpreter(R'function f() { let a=/,][}",],()}(\[)/; return a; }')
+        self.assertEqual(jsi.call_function('f').pattern, r',][}",],()}(\[)')
+
+        jsi = JSInterpreter(R'function f() { let a=[/[)\\]/]; return a[0]; }')
+        self.assertEqual(jsi.call_function('f').pattern, r'[)\\]')
+        '''
+
+    @unittest.skip('Not implemented')
+    def test_replace(self):
+        self._test('function f() { let a="data-name".replace("data-", ""); return a }',
+                   'name')
+        self._test('function f() { let a="data-name".replace(new RegExp("^.+-"), ""); return a; }',
+                   'name')
+        self._test('function f() { let a="data-name".replace(/^.+-/, ""); return a; }',
+                   'name')
+        self._test('function f() { let a="data-name".replace(/a/g, "o"); return a; }',
+                   'doto-nome')
+        self._test('function f() { let a="data-name".replaceAll("a", "o"); return a; }',
+                   'doto-nome')
 
     def test_char_code_at(self):
-        jsi = JSInterpreter('function x(i){return "test".charCodeAt(i)}')
-        self.assertEqual(jsi.call_function('x', 0), 116)
-        self.assertEqual(jsi.call_function('x', 1), 101)
-        self.assertEqual(jsi.call_function('x', 2), 115)
-        self.assertEqual(jsi.call_function('x', 3), 116)
-        self.assertEqual(jsi.call_function('x', 4), None)
-        self.assertEqual(jsi.call_function('x', 'not_a_number'), 116)
+        jsi = JSInterpreter('function f(i){return "test".charCodeAt(i)}')
+        self._test(jsi, 116, args=[0])
+        self._test(jsi, 101, args=[1])
+        self._test(jsi, 115, args=[2])
+        self._test(jsi, 116, args=[3])
+        self._test(jsi, None, args=[4])
+        self._test(jsi, 116, args=['not_a_number'])
 
     def test_bitwise_operators_overflow(self):
-        jsi = JSInterpreter('function x(){return -524999584 << 5}')
-        self.assertEqual(jsi.call_function('x'), 379882496)
-
-        jsi = JSInterpreter('function x(){return 1236566549 << 5}')
-        self.assertEqual(jsi.call_function('x'), 915423904)
+        self._test('function f(){return -524999584 << 5}', 379882496)
+        self._test('function f(){return 1236566549 << 5}', 915423904)
+
+    def test_bitwise_operators_typecast(self):
+        self._test('function f(){return null << 5}', 0)
+        self._test('function f(){return undefined >> 5}', 0)
+        self._test('function f(){return 42 << NaN}', 42)
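(Aside: both overflow expectations follow from JavaScript coercing shift operands to signed 32-bit integers; a standalone check of that rule, independent of the interpreter:)

    def to_int32(n):
        # emulate JS ToInt32: wrap modulo 2**32, then reinterpret as signed
        n &= 0xFFFFFFFF
        return n - 0x100000000 if n & 0x80000000 else n

    assert to_int32(-524999584 << 5) == 379882496
    assert to_int32(1236566549 << 5) == 915423904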
+    def test_negative(self):
+        self._test('function f(){return 2 * -2.0 ;}', -4)
+        self._test('function f(){return 2 - - -2 ;}', 0)
+        self._test('function f(){return 2 - - - -2 ;}', 4)
+        self._test('function f(){return 2 - + + - -2;}', 0)
+        self._test('function f(){return 2 + - + - -2;}', 0)
+
+    @unittest.skip('Not implemented')
+    def test_packed(self):
+        jsi = JSInterpreter('''function f(p,a,c,k,e,d){while(c--)if(k[c])p=p.replace(new RegExp('\\b'+c.toString(a)+'\\b','g'),k[c]);return p}''')
self.assertEqual(jsi.call_function('f', '''h 7=g("1j");7.7h({7g:[{33:"w://7f-7e-7d-7c.v.7b/7a/79/78/77/76.74?t=73&s=2s&e=72&f=2t&71=70.0.0.1&6z=6y&6x=6w"}],6v:"w://32.v.u/6u.31",16:"r%",15:"r%",6t:"6s",6r:"",6q:"l",6p:"l",6o:"6n",6m:\'6l\',6k:"6j",9:[{33:"/2u?b=6i&n=50&6h=w://32.v.u/6g.31",6f:"6e"}],1y:{6d:1,6c:\'#6b\',6a:\'#69\',68:"67",66:30,65:r,},"64":{63:"%62 2m%m%61%5z%5y%5x.u%5w%5v%5u.2y%22 2k%m%1o%22 5t%m%1o%22 5s%m%1o%22 2j%m%5r%22 16%m%5q%22 15%m%5p%22 5o%2z%5n%5m%2z",5l:"w://v.u/d/1k/5k.2y",5j:[]},\'5i\':{"5h":"5g"},5f:"5e",5d:"w://v.u",5c:{},5b:l,1x:[0.25,0.50,0.75,1,1.25,1.5,2]});h 1m,1n,5a;h 59=0,58=0;h 7=g("1j");h 2x=0,57=0,56=0;$.55({54:{\'53-52\':\'2i-51\'}});7.j(\'4z\',6(x){c(5>0&&x.1l>=5&&1n!=1){1n=1;$(\'q.4y\').4x(\'4w\')}});7.j(\'13\',6(x){2x=x.1l});7.j(\'2g\',6(x){2w(x)});7.j(\'4v\',6(){$(\'q.2v\').4u()});6 2w(x){$(\'q.2v\').4t();c(1m)19;1m=1;17=0;c(4s.4r===l){17=1}$.4q(\'/2u?b=4p&2l=1k&4o=2t-4n-4m-2s-4l&4k=&4j=&4i=&17=\'+17,6(2r){$(\'#4h\').4g(2r)});$(\'.3-8-4f-4e:4d("4c")\').2h(6(e){2q();g().4b(0);g().4a(l)});6 2q(){h $14=$("<q />").2p({1l:"49",16:"r%",15:"r%",48:0,2n:0,2o:47,46:"45(10%, 10%, 10%, 0.4)","44-43":"42"});$("<41 />").2p({16:"60%",15:"60%",2o:40,"3z-2n":"3y"}).3x({\'2m\':\'/?b=3w&2l=1k\',\'2k\':\'0\',\'2j\':\'2i\'}).2f($14);$14.2h(6(){$(3v).3u();g().2g()});$14.2f($(\'#1j\'))}g().13(0);}6 3t(){h 9=7.1b(2e);2d.2c(9);c(9.n>1){1r(i=0;i<9.n;i++){c(9[i].1a==2e){2d.2c(\'!!=\'+i);7.1p(i)}}}}7.j(\'3s\',6(){g().1h("/2a/3r.29","3q 10 28",6(){g().13(g().27()+10)},"2b");$("q[26=2b]").23().21(\'.3-20-1z\');g().1h("/2a/3p.29","3o 10 28",6(){h 12=g().27()-10;c(12<0)12=0;g().13(12)},"24");$("q[26=24]").23().21(\'.3-20-1z\');});6 1i(){}7.j(\'3n\',6(){1i()});7.j(\'3m\',6(){1i()});7.j("k",6(y){h 9=7.1b();c(9.n<2)19;$(\'.3-8-3l-3k\').3j(6(){$(\'#3-8-a-k\').1e(\'3-8-a-z\');$(\'.3-a-k\').p(\'o-1f\',\'11\')});7.1h("/3i/3h.3g","3f 3e",6(){$(\'.3-1w\').3d(\'3-8-1v\');$(\'.3-8-1y, .3-8-1x\').p(\'o-1g\',\'11\');c($(\'.3-1w\').3c(\'3-8-1v\')){$(\'.3-a-k\').p(\'o-1g\',\'l\');$(\'.3-a-k\').p(\'o-1f\',\'l\');$(\'.3-8-a\').1e(\'3-8-a-z\');$(\'.3-8-a:1u\').3b(\'3-8-a-z\')}3a{$(\'.3-a-k\').p(\'o-1g\',\'11\');$(\'.3-a-k\').p(\'o-1f\',\'11\');$(\'.3-8-a:1u\').1e(\'3-8-a-z\')}},"39");7.j("38",6(y){1d.37(\'1c\',y.9[y.36].1a)});c(1d.1t(\'1c\')){35("1s(1d.1t(\'1c\'));",34)}});h 18;6 1s(1q){h 
9=7.1b();c(9.n>1){1r(i=0;i<9.n;i++){c(9[i].1a==1q){c(i==18){19}18=i;7.1p(i)}}}}',36,270,'|||jw|||function|player|settings|tracks|submenu||if||||jwplayer|var||on|audioTracks|true|3D|length|aria|attr|div|100|||sx|filemoon|https||event|active||false|tt|seek|dd|height|width|adb|current_audio|return|name|getAudioTracks|default_audio|localStorage|removeClass|expanded|checked|addButton|callMeMaybe|vplayer|0fxcyc2ajhp1|position|vvplay|vvad|220|setCurrentAudioTrack|audio_name|for|audio_set|getItem|last|open|controls|playbackRates|captions|rewind|icon|insertAfter||detach|ff00||button|getPosition|sec|png|player8|ff11|log|console|track_name|appendTo|play|click|no|scrolling|frameborder|file_code|src|top|zIndex|css|showCCform|data|1662367683|383371|dl|video_ad|doPlay|prevt|mp4|3E||jpg|thumbs|file|300|setTimeout|currentTrack|setItem|audioTrackChanged|dualSound|else|addClass|hasClass|toggleClass|Track|Audio|svg|dualy|images|mousedown|buttons|topbar|playAttemptFailed|beforePlay|Rewind|fr|Forward|ff|ready|set_audio_track|remove|this|upload_srt|prop|50px|margin|1000001|iframe|center|align|text|rgba|background|1000000|left|absolute|pause|setCurrentCaptions|Upload|contains|item|content|html|fviews|referer|prem|embed|3e57249ef633e0d03bf76ceb8d8a4b65|216|83|hash|view|get|TokenZir|window|hide|show|complete|slow|fadeIn|video_ad_fadein|time||cache|Cache|Content|headers|ajaxSetup|v2done|tott|vastdone2|vastdone1|vvbefore|playbackRateControls|cast|aboutlink|FileMoon|abouttext|UHD|1870|qualityLabels|sites|GNOME_POWER|link|2Fiframe|3C|allowfullscreen|22360|22640|22no|marginheight|marginwidth|2FGNOME_POWER|2F0fxcyc2ajhp1|2Fe|2Ffilemoon|2F|3A||22https|3Ciframe|code|sharing|fontOpacity|backgroundOpacity|Tahoma|fontFamily|303030|backgroundColor|FFFFFF|color|userFontScale|thumbnails|kind|0fxcyc2ajhp10000|url|get_slides|start|startparam|none|preload|html5|primary|hlshtml|androidhls|duration|uniform|stretching|0fxcyc2ajhp1_xt|image|2048|sp|6871|asn|127|srv|43200|_g3XlBcu2lmD9oDexD2NLWSmah2Nu3XcDrl93m9PwXY|m3u8||master|0fxcyc2ajhp1_x|00076|01|hls2|to|s01|delivery|storage|moon|sources|setup'''.split('|')))


 if __name__ == '__main__':
@ -5,6 +5,7 @@
 import re
 import sys
 import unittest
+import warnings
 
 sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
 
@ -105,12 +106,14 @@
     sanitized_Request,
     shell_quote,
     smuggle_url,
+    str_or_none,
     str_to_int,
     strip_jsonp,
     strip_or_none,
     subtitles_filename,
     timeconvert,
     traverse_obj,
+    try_call,
     unescapeHTML,
     unified_strdate,
     unified_timestamp,
@ -122,6 +125,7 @@
     urlencode_postdata,
     urljoin,
     urshift,
+    variadic,
     version_tuple,
     xpath_attr,
     xpath_element,
@ -1189,6 +1193,13 @@ def test_js_to_json_malformed(self):
         self.assertEqual(js_to_json('42a1'), '42"a1"')
         self.assertEqual(js_to_json('42a-1'), '42"a"-1')
 
+    def test_js_to_json_template_literal(self):
+        self.assertEqual(js_to_json('`Hello ${name}`', {'name': '"world"'}), '"Hello world"')
+        self.assertEqual(js_to_json('`${name}${name}`', {'name': '"X"'}), '"XX"')
+        self.assertEqual(js_to_json('`${name}${name}`', {'name': '5'}), '"55"')
+        self.assertEqual(js_to_json('`${name}"${name}"`', {'name': '5'}), '"5\\"5\\""')
+        self.assertEqual(js_to_json('`${name}`', {}), '"name"')
+
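(Aside: the second argument maps JS identifiers to replacement JSON fragments, so template literals interpolate before conversion; illustrative usage, assuming the in-repo import path:)

    import json
    from yt_dlp.utils import js_to_json

    print(json.loads(js_to_json('`Hello ${name}`', {'name': '"world"'})))  # Hello world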
     def test_extract_attributes(self):
         self.assertEqual(extract_attributes('<e x="y">'), {'x': 'y'})
         self.assertEqual(extract_attributes("<e x='y'>"), {'x': 'y'})
@ -1966,6 +1977,35 @@ def test_get_compatible_ext(self):
         self.assertEqual(get_compatible_ext(
             vcodecs=['av1'], acodecs=['mp4a'], vexts=['webm'], aexts=['m4a'], preferences=('webm', 'mkv')), 'mkv')
 
+    def test_try_call(self):
+        def total(*x, **kwargs):
+            return sum(x) + sum(kwargs.values())
+
+        self.assertEqual(try_call(None), None,
+                         msg='not a fn should give None')
+        self.assertEqual(try_call(lambda: 1), 1,
+                         msg='int fn with no expected_type should give int')
+        self.assertEqual(try_call(lambda: 1, expected_type=int), 1,
+                         msg='int fn with expected_type int should give int')
+        self.assertEqual(try_call(lambda: 1, expected_type=dict), None,
+                         msg='int fn with wrong expected_type should give None')
+        self.assertEqual(try_call(total, args=(0, 1, 0, ), expected_type=int), 1,
+                         msg='fn should accept arglist')
+        self.assertEqual(try_call(total, kwargs={'a': 0, 'b': 1, 'c': 0}, expected_type=int), 1,
+                         msg='fn should accept kwargs')
+        self.assertEqual(try_call(lambda: 1, expected_type=dict), None,
+                         msg='int fn with no expected_type should give None')
+        self.assertEqual(try_call(lambda x: {}, total, args=(42, ), expected_type=int), 42,
+                         msg='expect first int result with expected_type int')
+
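(Aside: a minimal sketch of the `try_call` semantics these assertions pin down — call each candidate, swallow exceptions, and return the first result matching `expected_type`; yt-dlp's real helper lives in yt_dlp/utils.py:)

    def try_call_sketch(*funcs, expected_type=None, args=[], kwargs={}):
        for f in funcs:
            try:
                val = f(*args, **kwargs)
            except Exception:
                continue  # e.g. try_call(None) survives the TypeError
            if expected_type is None or isinstance(val, expected_type):
                return val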
+    def test_variadic(self):
+        self.assertEqual(variadic(None), (None, ))
+        self.assertEqual(variadic('spam'), ('spam', ))
+        self.assertEqual(variadic('spam', allowed_types=dict), 'spam')
+        with warnings.catch_warnings():
+            warnings.simplefilter('ignore')
+            self.assertEqual(variadic('spam', allowed_types=[dict]), 'spam')
+
|
def test_traverse_obj(self):
|
||||||
_TEST_DATA = {
|
_TEST_DATA = {
|
||||||
100: 100,
|
100: 100,
|
||||||
@ -1999,8 +2039,8 @@ def test_traverse_obj(self):
|
|||||||
|
|
||||||
# Test Ellipsis behavior
|
# Test Ellipsis behavior
|
||||||
self.assertCountEqual(traverse_obj(_TEST_DATA, ...),
|
self.assertCountEqual(traverse_obj(_TEST_DATA, ...),
|
||||||
(item for item in _TEST_DATA.values() if item is not None),
|
(item for item in _TEST_DATA.values() if item not in (None, {})),
|
||||||
msg='`...` should give all values except `None`')
|
msg='`...` should give all non discarded values')
|
||||||
self.assertCountEqual(traverse_obj(_TEST_DATA, ('urls', 0, ...)), _TEST_DATA['urls'][0].values(),
|
self.assertCountEqual(traverse_obj(_TEST_DATA, ('urls', 0, ...)), _TEST_DATA['urls'][0].values(),
|
||||||
msg='`...` selection for dicts should select all values')
|
msg='`...` selection for dicts should select all values')
|
||||||
self.assertEqual(traverse_obj(_TEST_DATA, (..., ..., 'url')),
|
self.assertEqual(traverse_obj(_TEST_DATA, (..., ..., 'url')),
|
||||||
@ -2008,6 +2048,8 @@ def test_traverse_obj(self):
|
|||||||
msg='nested `...` queries should work')
|
msg='nested `...` queries should work')
|
||||||
self.assertCountEqual(traverse_obj(_TEST_DATA, (..., ..., 'index')), range(4),
|
self.assertCountEqual(traverse_obj(_TEST_DATA, (..., ..., 'index')), range(4),
|
||||||
msg='`...` query result should be flattened')
|
msg='`...` query result should be flattened')
|
||||||
|
self.assertEqual(traverse_obj(iter(range(4)), ...), list(range(4)),
|
||||||
|
msg='`...` should accept iterables')
|
||||||
|
|
||||||
# Test function as key
|
# Test function as key
|
||||||
self.assertEqual(traverse_obj(_TEST_DATA, lambda x, y: x == 'urls' and isinstance(y, list)),
|
self.assertEqual(traverse_obj(_TEST_DATA, lambda x, y: x == 'urls' and isinstance(y, list)),
|
||||||
@ -2015,6 +2057,42 @@ def test_traverse_obj(self):
|
|||||||
msg='function as query key should perform a filter based on (key, value)')
|
msg='function as query key should perform a filter based on (key, value)')
|
||||||
self.assertCountEqual(traverse_obj(_TEST_DATA, lambda _, x: isinstance(x[0], str)), {'str'},
|
self.assertCountEqual(traverse_obj(_TEST_DATA, lambda _, x: isinstance(x[0], str)), {'str'},
|
||||||
msg='exceptions in the query function should be catched')
|
msg='exceptions in the query function should be catched')
|
||||||
|
self.assertEqual(traverse_obj(iter(range(4)), lambda _, x: x % 2 == 0), [0, 2],
|
||||||
|
msg='function key should accept iterables')
|
||||||
|
if __debug__:
|
||||||
|
with self.assertRaises(Exception, msg='Wrong function signature should raise in debug'):
|
||||||
|
traverse_obj(_TEST_DATA, lambda a: ...)
|
||||||
|
with self.assertRaises(Exception, msg='Wrong function signature should raise in debug'):
|
||||||
|
traverse_obj(_TEST_DATA, lambda a, b, c: ...)
|
||||||
|
|
||||||
|
# Test set as key (transformation/type, like `expected_type`)
|
||||||
|
self.assertEqual(traverse_obj(_TEST_DATA, (..., {str.upper}, )), ['STR'],
|
||||||
|
msg='Function in set should be a transformation')
|
||||||
|
self.assertEqual(traverse_obj(_TEST_DATA, (..., {str})), ['str'],
|
||||||
|
msg='Type in set should be a type filter')
|
||||||
|
self.assertEqual(traverse_obj(_TEST_DATA, {dict}), _TEST_DATA,
|
||||||
|
msg='A single set should be wrapped into a path')
|
||||||
|
self.assertEqual(traverse_obj(_TEST_DATA, (..., {str.upper})), ['STR'],
|
||||||
|
msg='Transformation function should not raise')
|
||||||
|
self.assertEqual(traverse_obj(_TEST_DATA, (..., {str_or_none})),
|
||||||
|
[item for item in map(str_or_none, _TEST_DATA.values()) if item is not None],
|
||||||
|
msg='Function in set should be a transformation')
|
||||||
|
if __debug__:
|
||||||
|
with self.assertRaises(Exception, msg='Sets with length != 1 should raise in debug'):
|
||||||
|
traverse_obj(_TEST_DATA, set())
|
||||||
|
with self.assertRaises(Exception, msg='Sets with length != 1 should raise in debug'):
|
||||||
|
traverse_obj(_TEST_DATA, {str.upper, str})
|
||||||
|
|
||||||
|
# Test `slice` as a key
|
||||||
|
_SLICE_DATA = [0, 1, 2, 3, 4]
|
||||||
|
self.assertEqual(traverse_obj(_TEST_DATA, ('dict', slice(1))), None,
|
||||||
|
msg='slice on a dictionary should not throw')
|
||||||
|
self.assertEqual(traverse_obj(_SLICE_DATA, slice(1)), _SLICE_DATA[:1],
|
||||||
|
msg='slice key should apply slice to sequence')
|
||||||
|
self.assertEqual(traverse_obj(_SLICE_DATA, slice(1, 2)), _SLICE_DATA[1:2],
|
||||||
|
msg='slice key should apply slice to sequence')
|
||||||
|
self.assertEqual(traverse_obj(_SLICE_DATA, slice(1, 4, 2)), _SLICE_DATA[1:4:2],
|
||||||
|
msg='slice key should apply slice to sequence')
|
||||||
|
|
||||||
# Test alternative paths
|
# Test alternative paths
|
||||||
self.assertEqual(traverse_obj(_TEST_DATA, 'fail', 'str'), 'str',
|
self.assertEqual(traverse_obj(_TEST_DATA, 'fail', 'str'), 'str',
|
||||||
@ -2060,15 +2138,23 @@ def test_traverse_obj(self):
|
|||||||
{0: ['https://www.example.com/1', 'https://www.example.com/0']},
|
{0: ['https://www.example.com/1', 'https://www.example.com/0']},
|
||||||
msg='tripple nesting in dict path should be treated as branches')
|
msg='tripple nesting in dict path should be treated as branches')
|
||||||
self.assertEqual(traverse_obj(_TEST_DATA, {0: 'fail'}), {},
|
self.assertEqual(traverse_obj(_TEST_DATA, {0: 'fail'}), {},
|
||||||
msg='remove `None` values when dict key')
|
msg='remove `None` values when top level dict key fails')
|
||||||
self.assertEqual(traverse_obj(_TEST_DATA, {0: 'fail'}, default=...), {0: ...},
|
self.assertEqual(traverse_obj(_TEST_DATA, {0: 'fail'}, default=...), {0: ...},
|
||||||
msg='do not remove `None` values if `default`')
|
msg='use `default` if key fails and `default`')
|
||||||
self.assertEqual(traverse_obj(_TEST_DATA, {0: 'dict'}), {0: {}},
|
self.assertEqual(traverse_obj(_TEST_DATA, {0: 'dict'}), {},
|
||||||
msg='do not remove empty values when dict key')
|
msg='remove empty values when dict key')
|
||||||
self.assertEqual(traverse_obj(_TEST_DATA, {0: 'dict'}, default=...), {0: {}},
|
self.assertEqual(traverse_obj(_TEST_DATA, {0: 'dict'}, default=...), {0: ...},
|
||||||
msg='do not remove empty values when dict key and a default')
|
msg='use `default` when dict key and `default`')
|
||||||
self.assertEqual(traverse_obj(_TEST_DATA, {0: ('dict', ...)}), {0: []},
|
self.assertEqual(traverse_obj(_TEST_DATA, {0: {0: 'fail'}}), {},
|
||||||
msg='if branch in dict key not successful, return `[]`')
|
msg='remove empty values when nested dict key fails')
|
||||||
|
self.assertEqual(traverse_obj(None, {0: 'fail'}), {},
|
||||||
|
msg='default to dict if pruned')
|
||||||
|
self.assertEqual(traverse_obj(None, {0: 'fail'}, default=...), {0: ...},
|
||||||
|
msg='default to dict if pruned and default is given')
|
||||||
|
self.assertEqual(traverse_obj(_TEST_DATA, {0: {0: 'fail'}}, default=...), {0: {0: ...}},
|
||||||
|
msg='use nested `default` when nested dict key fails and `default`')
|
||||||
|
self.assertEqual(traverse_obj(_TEST_DATA, {0: ('dict', ...)}), {},
|
||||||
|
msg='remove key if branch in dict key not successful')
|
||||||
|
|
||||||
# Testing default parameter behavior
|
# Testing default parameter behavior
|
||||||
_DEFAULT_DATA = {'None': None, 'int': 0, 'list': []}
|
_DEFAULT_DATA = {'None': None, 'int': 0, 'list': []}
|
||||||
@ -2092,20 +2178,55 @@ def test_traverse_obj(self):
|
|||||||
msg='if branched but not successful return `[]`, not `default`')
|
msg='if branched but not successful return `[]`, not `default`')
|
||||||
self.assertEqual(traverse_obj(_DEFAULT_DATA, ('list', ...)), [],
|
self.assertEqual(traverse_obj(_DEFAULT_DATA, ('list', ...)), [],
|
||||||
msg='if branched but object is empty return `[]`, not `default`')
|
msg='if branched but object is empty return `[]`, not `default`')
|
||||||
|
self.assertEqual(traverse_obj(None, ...), [],
|
||||||
|
msg='if branched but object is `None` return `[]`, not `default`')
|
||||||
|
self.assertEqual(traverse_obj({0: None}, (0, ...)), [],
|
||||||
|
msg='if branched but state is `None` return `[]`, not `default`')
|
||||||
|
|
||||||
|
branching_paths = [
|
||||||
|
('fail', ...),
|
||||||
|
(..., 'fail'),
|
||||||
|
100 * ('fail',) + (...,),
|
||||||
|
(...,) + 100 * ('fail',),
|
||||||
|
]
|
||||||
|
for branching_path in branching_paths:
|
||||||
|
self.assertEqual(traverse_obj({}, branching_path), [],
|
||||||
|
msg='if branched but state is `None`, return `[]` (not `default`)')
|
||||||
|
self.assertEqual(traverse_obj({}, 'fail', branching_path), [],
|
||||||
|
msg='if branching in last alternative and previous did not match, return `[]` (not `default`)')
|
||||||
|
self.assertEqual(traverse_obj({0: 'x'}, 0, branching_path), 'x',
|
||||||
|
msg='if branching in last alternative and previous did match, return single value')
|
||||||
|
self.assertEqual(traverse_obj({0: 'x'}, branching_path, 0), 'x',
|
||||||
|
msg='if branching in first alternative and non-branching path does match, return single value')
|
||||||
|
self.assertEqual(traverse_obj({}, branching_path, 'fail'), None,
|
||||||
|
msg='if branching in first alternative and non-branching path does not match, return `default`')
|
||||||
|
|
||||||
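(Aside: in extractor code these behaviours combine into the usual pattern below — branch with `...`, let failed branches prune away, and fall back across alternative paths; illustrative data only:)

    info = {'formats': [{'url': 'https://cdn.example.com/a.mp4'}, {'url': None}]}
    urls = traverse_obj(info, ('formats', ..., 'url'))
    # -> ['https://cdn.example.com/a.mp4']  (the `None` entry is discarded)
    best = traverse_obj(info, ('formats', 0, 'url'), ('formats', ..., 'url'))
    # -> 'https://cdn.example.com/a.mp4'  (first alternative already matches)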
# Testing expected_type behavior
|
# Testing expected_type behavior
|
||||||
_EXPECTED_TYPE_DATA = {'str': 'str', 'int': 0}
|
_EXPECTED_TYPE_DATA = {'str': 'str', 'int': 0}
|
||||||
self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=str), 'str',
|
self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=str),
|
||||||
msg='accept matching `expected_type` type')
|
'str', msg='accept matching `expected_type` type')
|
||||||
self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=int), None,
|
self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=int),
|
||||||
msg='reject non matching `expected_type` type')
|
None, msg='reject non matching `expected_type` type')
|
||||||
self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'int', expected_type=lambda x: str(x)), '0',
|
self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'int', expected_type=lambda x: str(x)),
|
||||||
msg='transform type using type function')
|
'0', msg='transform type using type function')
|
||||||
self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'str',
|
self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=lambda _: 1 / 0),
|
||||||
expected_type=lambda _: 1 / 0), None,
|
None, msg='wrap expected_type fuction in try_call')
|
||||||
msg='wrap expected_type fuction in try_call')
|
self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, ..., expected_type=str),
|
||||||
self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, ..., expected_type=str), ['str'],
|
['str'], msg='eliminate items that expected_type fails on')
|
||||||
msg='eliminate items that expected_type fails on')
|
self.assertEqual(traverse_obj(_TEST_DATA, {0: 100, 1: 1.2}, expected_type=int),
|
||||||
|
{0: 100}, msg='type as expected_type should filter dict values')
|
||||||
|
self.assertEqual(traverse_obj(_TEST_DATA, {0: 100, 1: 1.2, 2: 'None'}, expected_type=str_or_none),
|
||||||
|
{0: '100', 1: '1.2'}, msg='function as expected_type should transform dict values')
|
||||||
|
self.assertEqual(traverse_obj(_TEST_DATA, ({0: 1.2}, 0, {int_or_none}), expected_type=int),
|
||||||
|
1, msg='expected_type should not filter non final dict values')
|
||||||
|
self.assertEqual(traverse_obj(_TEST_DATA, {0: {0: 100, 1: 'str'}}, expected_type=int),
|
||||||
|
{0: {0: 100}}, msg='expected_type should transform deep dict values')
|
||||||
|
self.assertEqual(traverse_obj(_TEST_DATA, [({0: '...'}, {0: '...'})], expected_type=type(...)),
|
||||||
|
[{0: ...}, {0: ...}], msg='expected_type should transform branched dict values')
|
||||||
|
self.assertEqual(traverse_obj({1: {3: 4}}, [(1, 2), 3], expected_type=int),
|
||||||
|
[4], msg='expected_type regression for type matching in tuple branching')
|
||||||
|
self.assertEqual(traverse_obj(_TEST_DATA, ['data', ...], expected_type=int),
|
||||||
|
[], msg='expected_type regression for type matching in dict result')
|
||||||
|
|
||||||
# Test get_all behavior
|
# Test get_all behavior
|
||||||
_GET_ALL_DATA = {'key': [0, 1, 2]}
|
_GET_ALL_DATA = {'key': [0, 1, 2]}
|
||||||
@@ -2145,14 +2266,23 @@ def test_traverse_obj(self):
                                       traverse_string=True), '.',
                          msg='traverse into converted data if `traverse_string`')
         self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', ...),
-                                      traverse_string=True), list('str'),
-                         msg='`...` branching into string should result in list')
+                                      traverse_string=True), 'str',
+                         msg='`...` should result in string (same value) if `traverse_string`')
+        self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', slice(0, None, 2)),
+                                      traverse_string=True), 'sr',
+                         msg='`slice` should result in string if `traverse_string`')
+        self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', lambda i, v: i or v == "s"),
+                                      traverse_string=True), 'str',
+                         msg='function should result in string if `traverse_string`')
         self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', (0, 2)),
                                       traverse_string=True), ['s', 'r'],
-                         msg='branching into string should result in list')
-        self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', lambda _, x: x),
-                                      traverse_string=True), list('str'),
-                         msg='function branching into string should result in list')
+                         msg='branching should result in list if `traverse_string`')
+        self.assertEqual(traverse_obj({}, (0, ...), traverse_string=True), [],
+                         msg='branching should result in list if `traverse_string`')
+        self.assertEqual(traverse_obj({}, (0, lambda x, y: True), traverse_string=True), [],
+                         msg='branching should result in list if `traverse_string`')
+        self.assertEqual(traverse_obj({}, (0, slice(1)), traverse_string=True), [],
+                         msg='branching should result in list if `traverse_string`')

         # Test is_user_input behavior
         _IS_USER_INPUT_DATA = {'range8': list(range(8))}
@@ -2189,6 +2319,8 @@ def test_traverse_obj(self):
                          msg='failing str key on a `re.Match` should return `default`')
         self.assertEqual(traverse_obj(mobj, 8), None,
                          msg='failing int key on a `re.Match` should return `default`')
+        self.assertEqual(traverse_obj(mobj, lambda k, _: k in (0, 'group')), ['0123', '3'],
+                         msg='function on a `re.Match` should give group name as well')


 if __name__ == '__main__':
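The added `re.Match` assertion says a callable key sees group 0 and named groups alike. A small hedged sketch; the `mobj` below is a hypothetical match object chosen to reproduce the same groups, not the one from the test suite:

import re

from yt_dlp.utils import traverse_obj

# Hypothetical match: group 0 is '0123', named group 'group' is '3'
mobj = re.fullmatch(r'012(?P<group>3)', '0123')

# A callable key receives (key, value) pairs for group 0 and every named group
assert traverse_obj(mobj, lambda k, _: k in (0, 'group')) == ['0123', '3']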
@@ -62,10 +62,19 @@
         'https://s.ytimg.com/yts/jsbin/html5player-en_US-vflKjOTVq/html5player.js',
         '312AA52209E3623129A412D56A40F11CB0AF14AE.3EE09501CB14E3BCDC3B2AE808BF3F1D14E7FBF12',
         '112AA5220913623229A412D56A40F11CB0AF14AE.3EE0950FCB14EEBCDC3B2AE808BF331D14E7FBF3',
-    )
+    ),
+    (
+        'https://www.youtube.com/s/player/6ed0d907/player_ias.vflset/en_US/base.js',
+        '2aq0aqSyOoJXtK73m-uME_jv7-pT15gOFC02RFkGMqWpzEICs69VdbwQ0LDp1v7j8xx92efCJlYFYb1sUkkBSPOlPmXgIARw8JQ0qOAOAA',
+        'AOq0QJ8wRAIgXmPlOPSBkkUs1bYFYlJCfe29xx8j7v1pDL2QwbdV96sCIEzpWqMGkFR20CFOg51Tp-7vj_EMu-m37KtXJoOySqa0',
+    ),
 ]

 _NSIG_TESTS = [
+    (
+        'https://www.youtube.com/s/player/7862ca1f/player_ias.vflset/en_US/base.js',
+        'X_LCxVDjAavgE5t', 'yxJ1dM6iz5ogUg',
+    ),
     (
         'https://www.youtube.com/s/player/9216d1f7/player_ias.vflset/en_US/base.js',
         'SLp9F5bwjAdhE9F-', 'gWnb9IK2DJ8Q1w',
@@ -134,6 +143,26 @@
         'https://www.youtube.com/s/player/7a062b77/player_ias.vflset/en_US/base.js',
         'NRcE3y3mVtm_cV-W', 'VbsCYUATvqlt5w',
     ),
+    (
+        'https://www.youtube.com/s/player/dac945fd/player_ias.vflset/en_US/base.js',
+        'o8BkRxXhuYsBCWi6RplPdP', '3Lx32v_hmzTm6A',
+    ),
+    (
+        'https://www.youtube.com/s/player/6f20102c/player_ias.vflset/en_US/base.js',
+        'lE8DhoDmKqnmJJ', 'pJTTX6XyJP2BYw',
+    ),
+    (
+        'https://www.youtube.com/s/player/cfa9e7cb/player_ias.vflset/en_US/base.js',
+        'aCi3iElgd2kq0bxVbQ', 'QX1y8jGb2IbZ0w',
+    ),
+    (
+        'https://www.youtube.com/s/player/8c7583ff/player_ias.vflset/en_US/base.js',
+        '1wWCVpRR96eAmMI87L', 'KSkWAVv1ZQxC3A',
+    ),
+    (
+        'https://www.youtube.com/s/player/b7910ca8/player_ias.vflset/en_US/base.js',
+        '_hXMCwMt9qE310D', 'LoZMgkkofRMCZQ',
+    ),
 ]


@@ -210,7 +239,7 @@ def n_sig(jscode, sig_input):


 make_sig_test = t_factory(
-    'signature', signature, re.compile(r'.*-(?P<id>[a-zA-Z0-9_-]+)(?:/watch_as3|/html5player)?\.[a-z]+$'))
+    'signature', signature, re.compile(r'.*(?:-|/player/)(?P<id>[a-zA-Z0-9_-]+)(?:/.+\.js|(?:/watch_as3|/html5player)?\.[a-z]+)$'))
 for test_spec in _SIG_TESTS:
     make_sig_test(*test_spec)

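The widened regex has to derive a test ID from both the legacy `html5player` URLs and the newer `/s/player/<id>/.../base.js` URLs. A quick standalone check of that behavior (the `TEST_ID_RE` name is mine; the pattern is the one from the diff above):

import re

TEST_ID_RE = re.compile(r'.*(?:-|/player/)(?P<id>[a-zA-Z0-9_-]+)(?:/.+\.js|(?:/watch_as3|/html5player)?\.[a-z]+)$')

legacy = 'https://s.ytimg.com/yts/jsbin/html5player-en_US-vflKjOTVq/html5player.js'
modern = 'https://www.youtube.com/s/player/6ed0d907/player_ias.vflset/en_US/base.js'

# Greedy matching takes the segment after the last '-' or '/player/'
assert TEST_ID_RE.match(legacy).group('id') == 'vflKjOTVq'
assert TEST_ID_RE.match(modern).group('id') == '6ed0d907'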
@@ -13,6 +13,7 @@
 import random
 import re
 import shutil
+import string
 import subprocess
 import sys
 import tempfile
@@ -20,10 +21,9 @@
 import tokenize
 import traceback
 import unicodedata
-import urllib.request
-from string import ascii_letters

 from .cache import Cache
+from .compat import urllib  # isort: split
 from .compat import compat_os_name, compat_shlex_quote
 from .cookies import load_cookies
 from .downloader import FFmpegFD, get_suitable_downloader, shorten_protocol_name
@@ -124,7 +124,6 @@
     parse_filesize,
     preferredencoding,
     prepend_extension,
-    register_socks_protocols,
     remove_terminal_sequences,
     render_table,
     replace_extension,
@@ -150,7 +149,7 @@
     write_json_file,
     write_string,
 )
-from .version import RELEASE_GIT_HEAD, VARIANT, __version__
+from .version import CHANNEL, RELEASE_GIT_HEAD, VARIANT, __version__

 if compat_os_name == 'nt':
     import ctypes
@@ -190,6 +189,8 @@ class YoutubeDL:
     ap_username:       Multiple-system operator account username.
     ap_password:       Multiple-system operator account password.
     usenetrc:          Use netrc for authentication instead.
+    netrc_location:    Location of the netrc file. Defaults to ~/.netrc.
+    netrc_cmd:         Use a shell command to get credentials
     verbose:           Print additional info to stdout.
     quiet:             Do not print messages to stdout.
     no_warnings:       Do not print out anything for warnings.
@@ -258,7 +259,7 @@ class YoutubeDL:
     consoletitle:      Display progress in console window's titlebar.
     writedescription:  Write the video description to a .description file
     writeinfojson:     Write the video description to a .info.json file
-    clean_infojson:    Remove private fields from the infojson
+    clean_infojson:    Remove internal metadata from the infojson
     getcomments:       Extract video comments. This will not be written to disk
                        unless writeinfojson is also given
     writeannotations:  Write the video annotations to a .annotations.xml file
@@ -280,7 +281,7 @@ class YoutubeDL:
                        subtitles. The language can be prefixed with a "-" to
                        exclude it from the requested languages, e.g. ['all', '-live_chat']
     keepvideo:         Keep the video file after post-processing
-    daterange:         A DateRange object, download only if the upload_date is in the range.
+    daterange:         A utils.DateRange object, download only if the upload_date is in the range.
     skip_download:     Skip the actual download of the video file
     cachedir:          Location of the cache files in the filesystem.
                        False to disable filesystem cache.
@@ -300,8 +301,6 @@ class YoutubeDL:
                        Videos already present in the file are not downloaded again.
     break_on_existing: Stop the download process after attempting to download a
                        file that is in the archive.
-    break_on_reject:   Stop the download process when encountering a video that
-                       has been filtered out.
     break_per_url:     Whether break_on_reject and break_on_existing
                        should act on each input URL as opposed to for the entire queue
     cookiefile:        File name or text stream from where cookies should be read and dumped to
@@ -331,13 +330,13 @@ class YoutubeDL:
                        'auto' for elaborate guessing
     encoding:          Use this encoding instead of the system-specified.
     extract_flat:      Whether to resolve and process url_results further
-                       * False:     Always process (default)
+                       * False:     Always process. Default for API
                        * True:      Never process
                        * 'in_playlist': Do not process inside playlist/multi_video
                        * 'discard': Always process, but don't return the result
                                     from inside playlist/multi_video
                        * 'discard_in_playlist': Same as "discard", but only for
-                                    playlists (not multi_video)
+                                    playlists (not multi_video). Default for CLI
     wait_for_video:    If given, wait for scheduled streams to become available.
                        The value should be a tuple containing the range
                        (min_secs, max_secs) to wait between retries
@@ -414,8 +413,15 @@ class YoutubeDL:
                        - If it returns None, the video is downloaded.
                        - If it returns utils.NO_DEFAULT, the user is interactively
                          asked whether to download the video.
+                       - Raise utils.DownloadCancelled(msg) to abort remaining
+                         downloads when a video is rejected.
                        match_filter_func in utils.py is one example for this.
-    no_color:          Do not emit color codes in output.
+    color:             A Dictionary with output stream names as keys
+                       and their respective color policy as values.
+                       Can also just be a single color policy,
+                       in which case it applies to all outputs.
+                       Valid stream names are 'stdout' and 'stderr'.
+                       Valid color policies are one of 'always', 'auto', 'no_color' or 'never'.
     geo_bypass:        Bypass geographic restriction via faking X-Forwarded-For
                        HTTP header
     geo_bypass_country:
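Under the new scheme, `no_color=True` becomes just one point in a policy space. A hedged sketch of what the documented `color` values look like when constructing a `YoutubeDL` instance (parameter shapes taken from the docstring above; instantiating directly like this is only for illustration):

from yt_dlp import YoutubeDL

# A single policy applies to every output stream
ydl_plain = YoutubeDL({'color': 'never'})

# Per-stream policies: keep auto-detection on stdout, strip codes on stderr
ydl_split = YoutubeDL({'color': {'stdout': 'auto', 'stderr': 'no_color'}})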
@@ -472,7 +478,7 @@ class YoutubeDL:
                        can also be used

     The following options are used by the extractors:
-    extractor_retries: Number of times to retry for known errors
+    extractor_retries: Number of times to retry for known errors (default: 3)
     dynamic_mpd:       Whether to process dynamic DASH manifests (default: True)
     hls_split_discontinuity: Split HLS playlists to different formats at
                        discontinuities such as ad breaks (default: False)
@@ -483,6 +489,9 @@ class YoutubeDL:

     The following options are deprecated and may be removed in the future:

+    break_on_reject:   Stop the download process when encountering a video that
+                       has been filtered out.
+                       - `raise DownloadCancelled(msg)` in match_filter instead
     force_generic_extractor: Force downloader to use the generic extractor
                        - Use allowed_extractors = ['generic', 'default']
     playliststart:     - Use playlist_items
@@ -534,6 +543,7 @@ class YoutubeDL:
                        data will be downloaded and processed by extractor.
                        You can reduce network I/O by disabling it if you don't
                        care about HLS. (only for youtube)
+    no_color:          Same as `color='no_color'`
     """

     _NUMERIC_FIELDS = {
@@ -554,7 +564,7 @@ class YoutubeDL:
         'vbr', 'fps', 'vcodec', 'container', 'filesize', 'filesize_approx', 'rows', 'columns',
         'player_url', 'protocol', 'fragment_base_url', 'fragments', 'is_from_start',
         'preference', 'language', 'language_preference', 'quality', 'source_preference',
-        'http_headers', 'stretched_ratio', 'no_resume', 'has_drm', 'downloader_options',
+        'http_headers', 'stretched_ratio', 'no_resume', 'has_drm', 'extra_param_to_segment_url', 'hls_aes', 'downloader_options',
         'page_url', 'app', 'play_path', 'tc_url', 'flash_version', 'rtmp_live', 'rtmp_conn', 'rtmp_protocol', 'rtmp_real_time'
     }
     _format_selection_exts = {
@@ -600,9 +610,24 @@ def __init__(self, params=None, auto_init=True):
             except Exception as e:
                 self.write_debug(f'Failed to enable VT mode: {e}')

+        if self.params.get('no_color'):
+            if self.params.get('color') is not None:
+                self.report_warning('Overwriting params from "color" with "no_color"')
+            self.params['color'] = 'no_color'
+
+        term_allow_color = os.environ.get('TERM', '').lower() != 'dumb'
+
+        def process_color_policy(stream):
+            stream_name = {sys.stdout: 'stdout', sys.stderr: 'stderr'}[stream]
+            policy = traverse_obj(self.params, ('color', (stream_name, None), {str}), get_all=False)
+            if policy in ('auto', None):
+                return term_allow_color and supports_terminal_sequences(stream)
+            assert policy in ('always', 'never', 'no_color')
+            return {'always': True, 'never': False}.get(policy, policy)
+
         self._allow_colors = Namespace(**{
-            type_: not self.params.get('no_color') and supports_terminal_sequences(stream)
-            for type_, stream in self._out_files.items_ if type_ != 'console'
+            name: process_color_policy(stream)
+            for name, stream in self._out_files.items_ if name != 'console'
         })

         # The code is left like this to be reused for future deprecations
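The lookup order in process_color_policy is: explicit per-stream policy, then a whole-config policy, then 'auto' (terminal detection). A standalone re-implementation of that resolution in plain dict logic; the function and parameter names here are illustrative, not yt-dlp API:

def resolve_color_policy(params, stream_name, stream_is_tty):
    # Per-stream value wins; a bare string applies to all streams
    color = params.get('color')
    policy = color.get(stream_name) if isinstance(color, dict) else color
    if policy in ('auto', None):
        return stream_is_tty  # 'auto' falls back to terminal detection
    return {'always': True, 'never': False}.get(policy, policy)

assert resolve_color_policy({'color': 'never'}, 'stdout', True) is False
assert resolve_color_policy({'color': {'stderr': 'always'}}, 'stderr', False) is True
assert resolve_color_policy({}, 'stdout', True) is True  # default 'auto'

Note that 'no_color' is returned as-is (a truthy non-True value), which is why the `_format_text` change below compares `allow_colors is True` rather than relying on truthiness.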
@@ -614,7 +639,7 @@ def __init__(self, params=None, auto_init=True):
                 '\n You will no longer receive updates on this version')
             if current_version < MIN_SUPPORTED:
                 msg = 'Python version %d.%d is no longer supported'
-            self.deprecation_warning(
+            self.deprecated_feature(
                 f'{msg}! Please update to Python %d.%d or above' % (*current_version, *MIN_RECOMMENDED))

         if self.params.get('allow_unplayable_formats'):
@@ -735,7 +760,6 @@ def check_deprecated(param, option, suggestion):
                 when=when)

         self._setup_opener()
-        register_socks_protocols()

         def preload_download_archive(fn):
             """Preload the archive, if any is specified"""
@@ -972,7 +996,7 @@ def _format_text(self, handle, allow_colors, text, f, fallback=None, *, test_enc
             text = text.encode(encoding, 'ignore').decode(encoding)
             if fallback is not None and text != original_text:
                 text = fallback
-        return format_text(text, f) if allow_colors else text if fallback is None else fallback
+        return format_text(text, f) if allow_colors is True else text if fallback is None else fallback

     def _format_out(self, *args, **kwargs):
         return self._format_text(self._out_files.out, self._allow_colors.out, *args, **kwargs)
@@ -1075,7 +1099,7 @@ def _outtmpl_expandpath(outtmpl):
         # correspondingly that is not what we want since we need to keep
         # '%%' intact for template dict substitution step. Working around
         # with boundary-alike separator hack.
-        sep = ''.join(random.choices(ascii_letters, k=32))
+        sep = ''.join(random.choices(string.ascii_letters, k=32))
         outtmpl = outtmpl.replace('%%', f'%{sep}%').replace('$$', f'${sep}$')

         # outtmpl should be expand_path'ed before template dict substitution
@@ -1153,7 +1177,7 @@ def prepare_outtmpl(self, outtmpl, info_dict, sanitize=False):
         }
         MATH_FIELD_RE = rf'(?:{FIELD_RE}|-?{NUMBER_RE})'
         MATH_OPERATORS_RE = r'(?:%s)' % '|'.join(map(re.escape, MATH_FUNCTIONS.keys()))
-        INTERNAL_FORMAT_RE = re.compile(rf'''(?x)
+        INTERNAL_FORMAT_RE = re.compile(rf'''(?xs)
             (?P<negate>-)?
             (?P<fields>{FIELD_RE})
             (?P<maths>(?:{MATH_OPERATORS_RE}{MATH_FIELD_RE})*)
@@ -1234,6 +1258,14 @@ def _dumpjson_default(obj):
                 return list(obj)
             return repr(obj)

+        class _ReplacementFormatter(string.Formatter):
+            def get_field(self, field_name, args, kwargs):
+                if field_name.isdigit():
+                    return args[0], -1
+                raise ValueError('Unsupported field')
+
+        replacement_formatter = _ReplacementFormatter()
+
         def create_key(outer_mobj):
             if not outer_mobj.group('has_key'):
                 return outer_mobj.group(0)
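`_ReplacementFormatter` only resolves numeric fields, and always to the single positional argument, so a replacement string can splice the original field value in via `{0}` while anything else raises. A hedged standalone re-creation of the class added above:

import string

class ReplacementFormatter(string.Formatter):  # mirrors _ReplacementFormatter above
    def get_field(self, field_name, args, kwargs):
        if field_name.isdigit():
            return args[0], -1  # every numeric field maps to the one positional value
        raise ValueError('Unsupported field')

fmt = ReplacementFormatter()
assert fmt.format('[{0}]', 'value') == '[value]'

# Non-numeric fields are rejected rather than silently expanded
try:
    fmt.format('{title}', 'value')
except ValueError:
    pass

In the create_key changes below this is what lets an output-template replacement reference the field's value, with a failing format falling back to `na`.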
@@ -1255,11 +1287,17 @@ def create_key(outer_mobj):
             if fmt == 's' and value is not None and key in field_size_compat_map.keys():
                 fmt = f'0{field_size_compat_map[key]:d}d'

-            value = default if value is None else value if replacement is None else replacement
+            if None not in (value, replacement):
+                try:
+                    value = replacement_formatter.format(replacement, value)
+                except ValueError:
+                    value, default = None, na

             flags = outer_mobj.group('conversion') or ''
             str_fmt = f'{fmt[:-1]}s'
-            if fmt[-1] == 'l':  # list
+            if value is None:
+                value, fmt = default, 's'
+            elif fmt[-1] == 'l':  # list
                 delim = '\n' if '#' in flags else ', '
                 value, fmt = delim.join(map(str, variadic(value, allowed_types=(str, bytes)))), str_fmt
             elif fmt[-1] == 'j':  # json
@@ -1290,17 +1328,19 @@ def create_key(outer_mobj):
                     value = str(value)[0]
                 else:
                     fmt = str_fmt
-            elif fmt[-1] not in 'rs':  # numeric
+            elif fmt[-1] not in 'rsa':  # numeric
                 value = float_or_none(value)
                 if value is None:
                     value, fmt = default, 's'

             if sanitize:
+                # If value is an object, sanitize might convert it to a string
+                # So we convert it to repr first
                 if fmt[-1] == 'r':
-                    # If value is an object, sanitize might convert it to a string
-                    # So we convert it to repr first
                     value, fmt = repr(value), str_fmt
-                if fmt[-1] in 'csr':
+                elif fmt[-1] == 'a':
+                    value, fmt = ascii(value), str_fmt
+                if fmt[-1] in 'csra':
                     value = sanitizer(initial_field, value)

             key = '%s\0%s' % (key.replace('%', '%\0'), outer_mobj.group('format'))
@@ -1366,7 +1406,7 @@ def prepare_filename(self, info_dict, dir_type='', *, outtmpl=None, warn=False):

     def _match_entry(self, info_dict, incomplete=False, silent=False):
         """Returns None if the file should be downloaded"""
-        _type = info_dict.get('_type', 'video')
+        _type = 'video' if 'playlist-match-filter' in self.params['compat_opts'] else info_dict.get('_type', 'video')
         assert incomplete or _type == 'video', 'Only video result can be considered complete'

         video_title = info_dict.get('title', info_dict.get('id', 'entry'))
@@ -1407,31 +1447,44 @@ def check_filter():
                 return 'Skipping "%s" because it is age restricted' % video_title

         match_filter = self.params.get('match_filter')
-        if match_filter is not None:
+        if match_filter is None:
+            return None
+
+        cancelled = None
+        try:
             try:
                 ret = match_filter(info_dict, incomplete=incomplete)
             except TypeError:
                 # For backward compatibility
                 ret = None if incomplete else match_filter(info_dict)
-            if ret is NO_DEFAULT:
-                while True:
-                    filename = self._format_screen(self.prepare_filename(info_dict), self.Styles.FILENAME)
-                    reply = input(self._format_screen(
-                        f'Download "{filename}"? (Y/n): ', self.Styles.EMPHASIS)).lower().strip()
-                    if reply in {'y', ''}:
-                        return None
-                    elif reply == 'n':
-                        return f'Skipping {video_title}'
-            elif ret is not None:
-                return ret
+        except DownloadCancelled as err:
+            if err.msg is not NO_DEFAULT:
+                raise
+            ret, cancelled = err.msg, err
+
+        if ret is NO_DEFAULT:
+            while True:
+                filename = self._format_screen(self.prepare_filename(info_dict), self.Styles.FILENAME)
+                reply = input(self._format_screen(
+                    f'Download "{filename}"? (Y/n): ', self.Styles.EMPHASIS)).lower().strip()
+                if reply in {'y', ''}:
+                    return None
+                elif reply == 'n':
+                    if cancelled:
+                        raise type(cancelled)(f'Skipping {video_title}')
+                    return f'Skipping {video_title}'
+        return ret

         if self.in_download_archive(info_dict):
             reason = '%s has already been recorded in the archive' % video_title
             break_opt, break_err = 'break_on_existing', ExistingVideoReached
         else:
-            reason = check_filter()
-            break_opt, break_err = 'break_on_reject', RejectedVideoReached
+            try:
+                reason = check_filter()
+            except DownloadCancelled as e:
+                reason, break_opt, break_err = e.msg, 'match_filter', type(e)
+            else:
+                break_opt, break_err = 'break_on_reject', RejectedVideoReached
         if reason is not None:
             if not silent:
                 self.to_screen('[download] ' + reason)
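Per the reworked docstring, a match_filter can now raise utils.DownloadCancelled(msg) to stop the whole run instead of merely skipping one entry; the handler above converts a message-less DownloadCancelled into the interactive prompt. A hedged usage sketch (the filter conditions are made up for illustration):

from yt_dlp import YoutubeDL
from yt_dlp.utils import DownloadCancelled

def match_filter(info_dict, *, incomplete=False):
    # Skip a single entry by returning a string; abort everything by raising
    if info_dict.get('is_live'):
        return 'Skipping live stream'
    if info_dict.get('age_limit', 0) >= 18:
        raise DownloadCancelled('Age-restricted video encountered; aborting')
    return None  # download this one

ydl = YoutubeDL({'match_filter': match_filter})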
@@ -1647,7 +1700,7 @@ def process_ie_result(self, ie_result, download=True, extra_info=None):
             self.add_extra_info(info_copy, extra_info)
             info_copy, _ = self.pre_process(info_copy)
             self._fill_common_fields(info_copy, False)
-            self.__forced_printings(info_copy, self.prepare_filename(info_copy), incomplete=True)
+            self.__forced_printings(info_copy)
             self._raise_pending_errors(info_copy)
             if self.params.get('force_write_download_archive', False):
                 self.record_download_archive(info_copy)
@@ -1777,7 +1830,7 @@ def _playlist_infodict(ie_result, strict=False, **kwargs):
         return {
             **info,
             'playlist_index': 0,
-            '__last_playlist_index': max(ie_result['requested_entries'] or (0, 0)),
+            '__last_playlist_index': max(ie_result.get('requested_entries') or (0, 0)),
             'extractor': ie_result['extractor'],
             'extractor_key': ie_result['extractor_key'],
         }
@@ -1851,7 +1904,7 @@ def __process_playlist(self, ie_result, download):
                 continue

             entry['__x_forwarded_for_ip'] = ie_result.get('__x_forwarded_for_ip')
-            if not lazy and 'playlist-index' in self.params.get('compat_opts', []):
+            if not lazy and 'playlist-index' in self.params['compat_opts']:
                 playlist_index = ie_result['requested_entries'][i]

             entry_copy = collections.ChainMap(entry, {
@@ -1916,7 +1969,7 @@ def _build_format_filter(self, filter_spec):
             '!=': operator.ne,
         }
         operator_rex = re.compile(r'''(?x)\s*
-            (?P<key>width|height|tbr|abr|vbr|asr|filesize|filesize_approx|fps)\s*
+            (?P<key>[\w.-]+)\s*
             (?P<op>%s)(?P<none_inclusive>\s*\?)?\s*
             (?P<value>[0-9.]+(?:[kKmMgGtTpPeEzZyY]i?[Bb]?)?)\s*
             ''' % '|'.join(map(re.escape, OPERATORS.keys())))
@@ -2037,86 +2090,86 @@ def syntax_error(note, start):

         def _parse_filter(tokens):
             filter_parts = []
-            for type, string, start, _, _ in tokens:
-                if type == tokenize.OP and string == ']':
+            for type, string_, start, _, _ in tokens:
+                if type == tokenize.OP and string_ == ']':
                     return ''.join(filter_parts)
                 else:
-                    filter_parts.append(string)
+                    filter_parts.append(string_)

         def _remove_unused_ops(tokens):
             # Remove operators that we don't use and join them with the surrounding strings.
             # E.g. 'mp4' '-' 'baseline' '-' '16x9' is converted to 'mp4-baseline-16x9'
             ALLOWED_OPS = ('/', '+', ',', '(', ')')
             last_string, last_start, last_end, last_line = None, None, None, None
-            for type, string, start, end, line in tokens:
-                if type == tokenize.OP and string == '[':
+            for type, string_, start, end, line in tokens:
+                if type == tokenize.OP and string_ == '[':
                     if last_string:
                         yield tokenize.NAME, last_string, last_start, last_end, last_line
                         last_string = None
-                    yield type, string, start, end, line
+                    yield type, string_, start, end, line
                     # everything inside brackets will be handled by _parse_filter
-                    for type, string, start, end, line in tokens:
-                        yield type, string, start, end, line
-                        if type == tokenize.OP and string == ']':
+                    for type, string_, start, end, line in tokens:
+                        yield type, string_, start, end, line
+                        if type == tokenize.OP and string_ == ']':
                             break
-                elif type == tokenize.OP and string in ALLOWED_OPS:
+                elif type == tokenize.OP and string_ in ALLOWED_OPS:
                     if last_string:
                         yield tokenize.NAME, last_string, last_start, last_end, last_line
                         last_string = None
-                    yield type, string, start, end, line
+                    yield type, string_, start, end, line
                 elif type in [tokenize.NAME, tokenize.NUMBER, tokenize.OP]:
                     if not last_string:
-                        last_string = string
+                        last_string = string_
                         last_start = start
                         last_end = end
                     else:
-                        last_string += string
+                        last_string += string_
             if last_string:
                 yield tokenize.NAME, last_string, last_start, last_end, last_line

         def _parse_format_selection(tokens, inside_merge=False, inside_choice=False, inside_group=False):
             selectors = []
             current_selector = None
-            for type, string, start, _, _ in tokens:
+            for type, string_, start, _, _ in tokens:
                 # ENCODING is only defined in python 3.x
                 if type == getattr(tokenize, 'ENCODING', None):
                     continue
                 elif type in [tokenize.NAME, tokenize.NUMBER]:
-                    current_selector = FormatSelector(SINGLE, string, [])
+                    current_selector = FormatSelector(SINGLE, string_, [])
                 elif type == tokenize.OP:
-                    if string == ')':
+                    if string_ == ')':
                         if not inside_group:
                             # ')' will be handled by the parentheses group
                             tokens.restore_last_token()
                         break
-                    elif inside_merge and string in ['/', ',']:
+                    elif inside_merge and string_ in ['/', ',']:
                         tokens.restore_last_token()
                         break
-                    elif inside_choice and string == ',':
+                    elif inside_choice and string_ == ',':
                         tokens.restore_last_token()
                         break
-                    elif string == ',':
+                    elif string_ == ',':
                         if not current_selector:
                             raise syntax_error('"," must follow a format selector', start)
                         selectors.append(current_selector)
                         current_selector = None
-                    elif string == '/':
+                    elif string_ == '/':
                         if not current_selector:
                             raise syntax_error('"/" must follow a format selector', start)
                         first_choice = current_selector
                         second_choice = _parse_format_selection(tokens, inside_choice=True)
                         current_selector = FormatSelector(PICKFIRST, (first_choice, second_choice), [])
-                    elif string == '[':
+                    elif string_ == '[':
                         if not current_selector:
                             current_selector = FormatSelector(SINGLE, 'best', [])
                         format_filter = _parse_filter(tokens)
                         current_selector.filters.append(format_filter)
-                    elif string == '(':
+                    elif string_ == '(':
                         if current_selector:
                             raise syntax_error('Unexpected "("', start)
                         group = _parse_format_selection(tokens, inside_group=True)
                         current_selector = FormatSelector(GROUP, group, [])
-                    elif string == '+':
+                    elif string_ == '+':
                         if not current_selector:
                             raise syntax_error('Unexpected "+"', start)
                         selector_1 = current_selector
@@ -2125,7 +2178,7 @@ def _parse_format_selection(tokens, inside_merge=False, inside_choice=False, ins
                             raise syntax_error('Expected a selector', start)
                         current_selector = FormatSelector(MERGE, (selector_1, selector_2), [])
                 else:
-                    raise syntax_error(f'Operator not recognized: "{string}"', start)
+                    raise syntax_error(f'Operator not recognized: "{string_}"', start)
             elif type == tokenize.ENDMARKER:
                 break
         if current_selector:
@@ -2351,8 +2404,10 @@ def restore_last_token(self):

     def _calc_headers(self, info_dict):
         res = merge_headers(self.params['http_headers'], info_dict.get('http_headers') or {})
-        cookies = self._calc_cookies(info_dict['url'])
+        if 'Youtubedl-No-Compression' in res:  # deprecated
+            res.pop('Youtubedl-No-Compression', None)
+            res['Accept-Encoding'] = 'identity'
+        cookies = self.cookiejar.get_cookie_header(info_dict['url'])
         if cookies:
             res['Cookie'] = cookies

@@ -2364,9 +2419,8 @@ def _calc_headers(self, info_dict):
         return res

     def _calc_cookies(self, url):
-        pr = sanitized_Request(url)
-        self.cookiejar.add_cookie_header(pr)
-        return pr.get_header('Cookie')
+        self.deprecation_warning('"YoutubeDL._calc_cookies" is deprecated and may be removed in a future version')
+        return self.cookiejar.get_cookie_header(url)

     def _sort_thumbnails(self, thumbnails):
         thumbnails.sort(key=lambda t: (
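The header calculation now defers cookie formatting to the cookiejar itself and rewrites the deprecated Youtubedl-No-Compression marker into a real header. A hedged sketch of that rewrite on a plain dict (normalize_headers is a name of my own; the real code does this inline in _calc_headers):

def normalize_headers(res):
    # Translate the deprecated marker header into its modern equivalent
    if 'Youtubedl-No-Compression' in res:
        res.pop('Youtubedl-No-Compression', None)
        res['Accept-Encoding'] = 'identity'
    return res

assert normalize_headers({'Youtubedl-No-Compression': '1'}) == {'Accept-Encoding': 'identity'}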
@@ -2411,11 +2465,7 @@ def check_thumbnails(thumbnails):
     def _fill_common_fields(self, info_dict, final=True):
         # TODO: move sanitization here
         if final:
-            title = info_dict.get('title', NO_DEFAULT)
-            if title is NO_DEFAULT:
-                raise ExtractorError('Missing "title" field in extractor result',
-                                     video_id=info_dict['id'], ie=info_dict['extractor'])
-            info_dict['fulltitle'] = title
+            title = info_dict['fulltitle'] = info_dict.get('title')
             if not title:
                 if title == '':
                     self.write_debug('Extractor gave empty title. Creating a generic title')
@@ -2470,15 +2520,8 @@ def _raise_pending_errors(self, info):

     def sort_formats(self, info_dict):
         formats = self._get_formats(info_dict)
-        if not formats:
-            return
-        # Backward compatibility with InfoExtractor._sort_formats
-        field_preference = formats[0].pop('__sort_fields', None)
-        if field_preference:
-            info_dict['_format_sort_fields'] = field_preference
-
         formats.sort(key=FormatSorter(
-            self, info_dict.get('_format_sort_fields', [])).calculate_preference)
+            self, info_dict.get('_format_sort_fields') or []).calculate_preference)

     def process_video_result(self, info_dict, download=True):
         assert info_dict.get('_type', 'video') == 'video'
@@ -2565,9 +2608,13 @@ def sanitize_numeric_fields(info):
             info_dict['requested_subtitles'] = self.process_subtitles(
                 info_dict['id'], subtitles, automatic_captions)

-        self.sort_formats(info_dict)
         formats = self._get_formats(info_dict)

+        # Backward compatibility with InfoExtractor._sort_formats
+        field_preference = (formats or [{}])[0].pop('__sort_fields', None)
+        if field_preference:
+            info_dict['_format_sort_fields'] = field_preference
+
         # or None ensures --clean-infojson removes it
         info_dict['_has_drm'] = any(f.get('has_drm') for f in formats) or None
         if not self.params.get('allow_unplayable_formats'):
@@ -2605,44 +2652,12 @@ def is_wellformed(f):
         if not formats:
             self.raise_no_formats(info_dict)

-        formats_dict = {}
-
-        # We check that all the formats have the format and format_id fields
-        for i, format in enumerate(formats):
+        for format in formats:
             sanitize_string_field(format, 'format_id')
             sanitize_numeric_fields(format)
             format['url'] = sanitize_url(format['url'])
-            if not format.get('format_id'):
-                format['format_id'] = str(i)
-            else:
-                # Sanitize format_id from characters used in format selector expression
-                format['format_id'] = re.sub(r'[\s,/+\[\]()]', '_', format['format_id'])
-            format_id = format['format_id']
-            if format_id not in formats_dict:
-                formats_dict[format_id] = []
-            formats_dict[format_id].append(format)
-
-        # Make sure all formats have unique format_id
-        common_exts = set(itertools.chain(*self._format_selection_exts.values()))
-        for format_id, ambiguous_formats in formats_dict.items():
-            ambigious_id = len(ambiguous_formats) > 1
-            for i, format in enumerate(ambiguous_formats):
-                if ambigious_id:
-                    format['format_id'] = '%s-%d' % (format_id, i)
-                if format.get('ext') is None:
-                    format['ext'] = determine_ext(format['url']).lower()
-                # Ensure there is no conflict between id and ext in format selection
-                # See https://github.com/yt-dlp/yt-dlp/issues/1282
-                if format['format_id'] != format['ext'] and format['format_id'] in common_exts:
-                    format['format_id'] = 'f%s' % format['format_id']
-
-        for i, format in enumerate(formats):
-            if format.get('format') is None:
-                format['format'] = '{id} - {res}{note}'.format(
-                    id=format['format_id'],
-                    res=self.format_resolution(format),
-                    note=format_field(format, 'format_note', ' (%s)'),
-                )
+            if format.get('ext') is None:
+                format['ext'] = determine_ext(format['url']).lower()
             if format.get('protocol') is None:
                 format['protocol'] = determine_protocol(format)
             if format.get('resolution') is None:
@@ -2651,19 +2666,50 @@ def is_wellformed(f):
                 format['dynamic_range'] = 'SDR'
             if format.get('aspect_ratio') is None:
                 format['aspect_ratio'] = try_call(lambda: round(format['width'] / format['height'], 2))
-            if (info_dict.get('duration') and format.get('tbr')
+            if (not format.get('manifest_url')  # For fragmented formats, "tbr" is often max bitrate and not average
+                    and info_dict.get('duration') and format.get('tbr')
                     and not format.get('filesize') and not format.get('filesize_approx')):
                 format['filesize_approx'] = int(info_dict['duration'] * format['tbr'] * (1024 / 8))
+            format['http_headers'] = self._calc_headers(collections.ChainMap(format, info_dict))

-            # Add HTTP headers, so that external programs can use them from the
-            # json output
-            full_format_info = info_dict.copy()
-            full_format_info.update(format)
-            format['http_headers'] = self._calc_headers(full_format_info)
-        # Remove private housekeeping stuff
+        # This is copied to http_headers by the above _calc_headers and can now be removed
         if '__x_forwarded_for_ip' in info_dict:
             del info_dict['__x_forwarded_for_ip']

+        self.sort_formats({
+            'formats': formats,
+            '_format_sort_fields': info_dict.get('_format_sort_fields')
+        })
+
+        # Sanitize and group by format_id
+        formats_dict = {}
+        for i, format in enumerate(formats):
+            if not format.get('format_id'):
+                format['format_id'] = str(i)
+            else:
+                # Sanitize format_id from characters used in format selector expression
+                format['format_id'] = re.sub(r'[\s,/+\[\]()]', '_', format['format_id'])
+            formats_dict.setdefault(format['format_id'], []).append(format)
+
+        # Make sure all formats have unique format_id
+        common_exts = set(itertools.chain(*self._format_selection_exts.values()))
+        for format_id, ambiguous_formats in formats_dict.items():
+            ambigious_id = len(ambiguous_formats) > 1
+            for i, format in enumerate(ambiguous_formats):
+                if ambigious_id:
+                    format['format_id'] = '%s-%d' % (format_id, i)
+                # Ensure there is no conflict between id and ext in format selection
+                # See https://github.com/yt-dlp/yt-dlp/issues/1282
+                if format['format_id'] != format['ext'] and format['format_id'] in common_exts:
+                    format['format_id'] = 'f%s' % format['format_id']
+
+                if format.get('format') is None:
+                    format['format'] = '{id} - {res}{note}'.format(
+                        id=format['format_id'],
+                        res=self.format_resolution(format),
+                        note=format_field(format, 'format_note', ' (%s)'),
+                    )
+
         if self.params.get('check_formats') is True:
             formats = LazyList(self._check_formats(formats[::-1]), reverse=True)

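The new guard skips `filesize_approx` for fragmented formats, where `tbr` tends to be a peak rather than an average; when it does apply, the estimate is plain arithmetic: duration in seconds * total bitrate in Kbit/s * 1024/8 bytes. A quick check of that formula with made-up numbers:

duration = 60   # seconds
tbr = 1000      # total bitrate in Kbit/s

filesize_approx = int(duration * tbr * (1024 / 8))
assert filesize_approx == 7_680_000  # 60 s at 1000 Kbit/s is about 7.68 MB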
@@ -2698,25 +2744,26 @@ def is_wellformed(f):
             self.list_formats(info_dict)
         if list_only:
             # Without this printing, -F --print-json will not work
-            self.__forced_printings(info_dict, self.prepare_filename(info_dict), incomplete=True)
+            self.__forced_printings(info_dict)
             return info_dict

         format_selector = self.format_selector
-        if format_selector is None:
-            req_format = self._default_format_spec(info_dict, download=download)
-            self.write_debug('Default format spec: %s' % req_format)
-            format_selector = self.build_format_selector(req_format)
-
         while True:
             if interactive_format_selection:
-                req_format = input(
-                    self._format_screen('\nEnter format selector: ', self.Styles.EMPHASIS))
+                req_format = input(self._format_screen('\nEnter format selector ', self.Styles.EMPHASIS)
+                                   + '(Press ENTER for default, or Ctrl+C to quit)'
+                                   + self._format_screen(': ', self.Styles.EMPHASIS))
                 try:
-                    format_selector = self.build_format_selector(req_format)
+                    format_selector = self.build_format_selector(req_format) if req_format else None
                 except SyntaxError as err:
                     self.report_error(err, tb=False, is_error=False)
                     continue

+            if format_selector is None:
+                req_format = self._default_format_spec(info_dict, download=download)
+                self.write_debug(f'Default format spec: {req_format}')
+                format_selector = self.build_format_selector(req_format)
+
             formats_to_download = list(format_selector({
                 'formats': formats,
                 'has_merged_format': any('none' not in (f.get('acodec'), f.get('vcodec')) for f in formats),
@@ -2759,11 +2806,13 @@ def to_screen(*msg):
                 new_info.update(fmt)
                 offset, duration = info_dict.get('section_start') or 0, info_dict.get('duration') or float('inf')
                 end_time = offset + min(chapter.get('end_time', duration), duration)
+                # duration may not be accurate. So allow deviations <1sec
+                if end_time == float('inf') or end_time > offset + duration + 1:
+                    end_time = None
                 if chapter or offset:
                     new_info.update({
                         'section_start': offset + chapter.get('start_time', 0),
-                        # duration may not be accurate. So allow deviations <1sec
-                        'section_end': end_time if end_time <= offset + duration + 1 else None,
+                        'section_end': end_time,
                         'section_title': chapter.get('title'),
                         'section_number': chapter.get('index'),
                     })
@@ -2819,10 +2868,14 @@ def process_subtitles(self, video_id, normal_subtitles, automatic_captions):
                     self.params.get('subtitleslangs'), {'all': all_sub_langs}, use_regex=True)
             except re.error as e:
                 raise ValueError(f'Wrong regex for subtitlelangs: {e.pattern}')
-        elif normal_sub_langs:
-            requested_langs = ['en'] if 'en' in normal_sub_langs else normal_sub_langs[:1]
         else:
-            requested_langs = ['en'] if 'en' in all_sub_langs else all_sub_langs[:1]
+            requested_langs = LazyList(itertools.chain(
+                ['en'] if 'en' in normal_sub_langs else [],
+                filter(lambda f: f.startswith('en'), normal_sub_langs),
+                ['en'] if 'en' in all_sub_langs else [],
+                filter(lambda f: f.startswith('en'), all_sub_langs),
+                normal_sub_langs, all_sub_langs,
+            ))[:1]
         if requested_langs:
            self.to_screen(f'[info] {video_id}: Downloading subtitles: {", ".join(requested_langs)}')
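The default subtitle language is now picked from a lazily evaluated preference chain: exact 'en', then any 'en*' variant, first among normal subtitles and then among automatic captions, before falling back to whatever exists. A standalone sketch of the same chain using plain itertools (pick_default_lang is an illustrative name, not yt-dlp API):

import itertools

def pick_default_lang(normal_sub_langs, all_sub_langs):
    # First match wins; islice avoids evaluating the whole chain
    chain = itertools.chain(
        ['en'] if 'en' in normal_sub_langs else [],
        (lang for lang in normal_sub_langs if lang.startswith('en')),
        ['en'] if 'en' in all_sub_langs else [],
        (lang for lang in all_sub_langs if lang.startswith('en')),
        normal_sub_langs, all_sub_langs,
    )
    return list(itertools.islice(chain, 1))

assert pick_default_lang(['de', 'en-US'], ['de', 'en-US', 'fr']) == ['en-US']
assert pick_default_lang([], ['fr']) == ['fr']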
@@ -2854,6 +2907,12 @@ def _forceprint(self, key, info_dict):
         if info_dict is None:
             return
         info_copy = info_dict.copy()
+        info_copy.setdefault('filename', self.prepare_filename(info_dict))
+        if info_dict.get('requested_formats') is not None:
+            # For RTMP URLs, also include the playpath
+            info_copy['urls'] = '\n'.join(f['url'] + f.get('play_path', '') for f in info_dict['requested_formats'])
+        elif info_dict.get('url'):
+            info_copy['urls'] = info_dict['url'] + info_dict.get('play_path', '')
         info_copy['formats_table'] = self.render_formats_table(info_dict)
         info_copy['thumbnails_table'] = self.render_thumbnails_table(info_dict)
         info_copy['subtitles_table'] = self.render_subtitles_table(info_dict.get('id'), info_dict.get('subtitles'))
@@ -2866,7 +2925,7 @@ def format_tmpl(tmpl):

             fmt = '%({})s'
             if tmpl.startswith('{'):
-                tmpl = f'.{tmpl}'
+                tmpl, fmt = f'.{tmpl}', '%({})j'
             if tmpl.endswith('='):
                 tmpl, fmt = tmpl[:-1], '{0} = %({0})#j'
             return '\n'.join(map(fmt.format, [tmpl] if mobj.group('dict') else tmpl.split(',')))
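format_tmpl expands the shorthand `--print` templates into regular output-template syntax: a `{...}` selection now becomes a JSON dump, and a trailing `=` becomes a labelled JSON field. A hedged reduction of the helper above (the is_dict flag stands in for `mobj.group('dict')`):

def format_tmpl(tmpl, is_dict):
    # Illustrative reduction of the helper above
    fmt = '%({})s'
    if tmpl.startswith('{'):
        tmpl, fmt = f'.{tmpl}', '%({})j'
    if tmpl.endswith('='):
        tmpl, fmt = tmpl[:-1], '{0} = %({0})#j'
    return '\n'.join(map(fmt.format, [tmpl] if is_dict else tmpl.split(',')))

assert format_tmpl('{id,title}', True) == '%(.{id,title})j'
assert format_tmpl('title=', False) == 'title = %(title)#j'
assert format_tmpl('id,title', False) == '%(id)s\n%(title)s'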
@@ -2879,46 +2938,36 @@ def format_tmpl(tmpl):
             tmpl = format_tmpl(tmpl)
             self.to_screen(f'[info] Writing {tmpl!r} to: {filename}')
             if self._ensure_dir_exists(filename):
-                with open(filename, 'a', encoding='utf-8') as f:
-                    f.write(self.evaluate_outtmpl(tmpl, info_copy) + '\n')
+                with open(filename, 'a', encoding='utf-8', newline='') as f:
+                    f.write(self.evaluate_outtmpl(tmpl, info_copy) + os.linesep)

-    def __forced_printings(self, info_dict, filename, incomplete):
-        def print_mandatory(field, actual_field=None):
-            if actual_field is None:
-                actual_field = field
-            if (self.params.get('force%s' % field, False)
-                    and (not incomplete or info_dict.get(actual_field) is not None)):
-                self.to_stdout(info_dict[actual_field])
-
-        def print_optional(field):
-            if (self.params.get('force%s' % field, False)
-                    and info_dict.get(field) is not None):
-                self.to_stdout(info_dict[field])
-
-        info_dict = info_dict.copy()
-        if filename is not None:
-            info_dict['filename'] = filename
-        if info_dict.get('requested_formats') is not None:
-            # For RTMP URLs, also include the playpath
-            info_dict['urls'] = '\n'.join(f['url'] + f.get('play_path', '') for f in info_dict['requested_formats'])
-        elif info_dict.get('url'):
-            info_dict['urls'] = info_dict['url'] + info_dict.get('play_path', '')
+        return info_copy

+    def __forced_printings(self, info_dict, filename=None, incomplete=True):
         if (self.params.get('forcejson')
                 or self.params['forceprint'].get('video')
                 or self.params['print_to_file'].get('video')):
             self.post_extract(info_dict)
-        self._forceprint('video', info_dict)
+        if filename:
+            info_dict['filename'] = filename
+        info_copy = self._forceprint('video', info_dict)

-        print_mandatory('title')
-        print_mandatory('id')
-        print_mandatory('url', 'urls')
-        print_optional('thumbnail')
-        print_optional('description')
-        print_optional('filename')
-        if self.params.get('forceduration') and info_dict.get('duration') is not None:
-            self.to_stdout(formatSeconds(info_dict['duration']))
-        print_mandatory('format')
+        def print_field(field, actual_field=None, optional=False):
+            if actual_field is None:
+                actual_field = field
+            if self.params.get(f'force{field}') and (
+                    info_copy.get(field) is not None or (not optional and not incomplete)):
+                self.to_stdout(info_copy[actual_field])
+
+        print_field('title')
+        print_field('id')
+        print_field('url', 'urls')
+        print_field('thumbnail', optional=True)
+        print_field('description', optional=True)
+        print_field('filename')
+        if self.params.get('forceduration') and info_copy.get('duration') is not None:
+            self.to_stdout(formatSeconds(info_copy['duration']))
+        print_field('format')

         if self.params.get('forcejson'):
             self.to_stdout(json.dumps(self.sanitize_info(info_dict)))
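As orientation, a standalone re-creation (trimmed-down params and info dict, not the class itself) of how print_field gates output: mandatory fields may print even for missing values when the data is complete, while optional ones print only when present.

# Simplified stand-in; 'example' values are illustrative
params = {'forcetitle': True, 'forcethumbnail': True}
info_copy = {'title': 'example', 'thumbnail': None}

def print_field(field, optional=False, incomplete=True):
    if params.get(f'force{field}') and (
            info_copy.get(field) is not None or (not optional and not incomplete)):
        print(info_copy[field])

print_field('title')                     # prints 'example'
print_field('thumbnail', optional=True)  # prints nothing: optional and missing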
@@ -3140,7 +3189,6 @@ def existing_video_file(*filepaths):
             return

         if info_dict.get('requested_formats') is not None:
-            requested_formats = info_dict['requested_formats']
             old_ext = info_dict['ext']
             if self.params.get('merge_output_format') is None:
                 if (info_dict['ext'] == 'webm'
@@ -3167,19 +3215,22 @@ def correct_ext(filename, ext=new_ext):
             full_filename = correct_ext(full_filename)
             temp_filename = correct_ext(temp_filename)
             dl_filename = existing_video_file(full_filename, temp_filename)

             info_dict['__real_download'] = False
+            # NOTE: Copy so that original format dicts are not modified
+            info_dict['requested_formats'] = list(map(dict, info_dict['requested_formats']))

             merger = FFmpegMergerPP(self)
             downloaded = []
             if dl_filename is not None:
                 self.report_file_already_downloaded(dl_filename)
             elif fd:
-                for f in requested_formats if fd != FFmpegFD else []:
+                for f in info_dict['requested_formats'] if fd != FFmpegFD else []:
                     f['filepath'] = fname = prepend_extension(
                         correct_ext(temp_filename, info_dict['ext']),
                         'f%s' % f['format_id'], info_dict['ext'])
                     downloaded.append(fname)
-                info_dict['url'] = '\n'.join(f['url'] for f in requested_formats)
+                info_dict['url'] = '\n'.join(f['url'] for f in info_dict['requested_formats'])
                 success, real_download = self.dl(temp_filename, info_dict)
                 info_dict['__real_download'] = real_download
             else:
@@ -3203,7 +3254,7 @@ def correct_ext(filename, ext=new_ext):
                     f'You have requested downloading multiple formats to stdout {reason}. '
                     'The formats will be streamed one after the other')
                 fname = temp_filename
-                for f in requested_formats:
+                for f in info_dict['requested_formats']:
                     new_info = dict(info_dict)
                     del new_info['requested_formats']
                     new_info.update(f)
@@ -3301,7 +3352,7 @@ def ffmpeg_fixup(cndn, msg, cls):
                     or info_dict.get('is_live') and self.params.get('hls_use_mpegts') is None,
                     'Possible MPEG-TS in MP4 container or malformed AAC timestamps',
                     FFmpegFixupM3u8PP)
-                ffmpeg_fixup(info_dict.get('is_live') and downloader == 'DashSegmentsFD',
+                ffmpeg_fixup(info_dict.get('is_live') and downloader == 'dashsegments',
                              'Possible duplicate MOOV atoms', FFmpegFixupDuplicateMoovPP)

                 ffmpeg_fixup(downloader == 'web_socket_fragment', 'Malformed timestamps detected', FFmpegFixupTimestampPP)
@@ -3365,18 +3416,19 @@ def download_with_info_file(self, info_filename):
                 [info_filename], mode='r',
                 openhook=fileinput.hook_encoded('utf-8'))) as f:
             # FileInput doesn't have a read method, we can't call json.load
-            info = self.sanitize_info(json.loads('\n'.join(f)), self.params.get('clean_infojson', True))
-        try:
-            self.__download_wrapper(self.process_ie_result)(info, download=True)
-        except (DownloadError, EntryNotInPlaylist, ReExtractInfo) as e:
-            if not isinstance(e, EntryNotInPlaylist):
-                self.to_stderr('\r')
-            webpage_url = info.get('webpage_url')
-            if webpage_url is not None:
-                self.report_warning(f'The info failed to download: {e}; trying with URL {webpage_url}')
-                return self.download([webpage_url])
-            else:
-                raise
+            infos = [self.sanitize_info(info, self.params.get('clean_infojson', True))
+                     for info in variadic(json.loads('\n'.join(f)))]
+        for info in infos:
+            try:
+                self.__download_wrapper(self.process_ie_result)(info, download=True)
+            except (DownloadError, EntryNotInPlaylist, ReExtractInfo) as e:
+                if not isinstance(e, EntryNotInPlaylist):
+                    self.to_stderr('\r')
+                webpage_url = info.get('webpage_url')
+                if webpage_url is None:
+                    raise
+                self.report_warning(f'The info failed to download: {e}; trying with URL {webpage_url}')
+                self.download([webpage_url])
         return self._download_retcode

     @staticmethod
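A small standalone sketch of why variadic() appears here: --load-info-json may now point at a file holding either a single info dict or a list of them. The helper below is a simplified stand-in for yt-dlp's own variadic.

# Simplified stand-in: a dict counts as a single item
def variadic(x):
    return x if isinstance(x, (list, tuple)) else [x]

for info in variadic({'id': 'a'}):
    print(info['id'])                              # a
for info in variadic([{'id': 'a'}, {'id': 'b'}]):
    print(info['id'])                              # a, b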
@@ -3396,8 +3448,8 @@ def sanitize_info(info_dict, remove_private_keys=False):
         if remove_private_keys:
             reject = lambda k, v: v is None or k.startswith('__') or k in {
                 'requested_downloads', 'requested_formats', 'requested_subtitles', 'requested_entries',
-                'entries', 'filepath', '_filename', 'infojson_filename', 'original_url', 'playlist_autonumber',
-                '_format_sort_fields',
+                'entries', 'filepath', '_filename', 'filename', 'infojson_filename', 'original_url',
+                'playlist_autonumber', '_format_sort_fields',
             }
         else:
             reject = lambda k, v: False
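To see the effect of adding 'filename' to the reject set, here is a simplified standalone filter (not the method itself) over a toy info dict:

# Trimmed-down reject set and info dict, for illustration only
reject_keys = {'filename', '_filename', 'filepath'}
info = {'id': 'x', 'filename': 'x.mp4', 'title': 'demo'}
clean = {k: v for k, v in info.items()
         if v is not None and not k.startswith('__') and k not in reject_keys}
print(clean)  # {'id': 'x', 'title': 'demo'}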
@@ -3658,8 +3710,11 @@ def simplified_codec(f, field):
                 format_field(f, 'fps', '\t%d', func=round),
                 format_field(f, 'dynamic_range', '%s', ignore=(None, 'SDR')).replace('HDR', ''),
                 format_field(f, 'audio_channels', '\t%s'),
-                delim,
-                format_field(f, 'filesize', ' \t%s', func=format_bytes) + format_field(f, 'filesize_approx', '~\t%s', func=format_bytes),
+                delim, (
+                    format_field(f, 'filesize', ' \t%s', func=format_bytes)
+                    or format_field(f, 'filesize_approx', '≈\t%s', func=format_bytes)
+                    or format_field(try_call(lambda: format_bytes(int(info_dict['duration'] * f['tbr'] * (1024 / 8)))),
+                                    None, self._format_out('~\t%s', self.Styles.SUPPRESS))),
                 format_field(f, 'tbr', '\t%dk', func=round),
                 shorten_protocol_name(f.get('protocol', '')),
                 delim,
@@ -3670,6 +3725,7 @@ def simplified_codec(f, field):
                 format_field(f, 'asr', '\t%s', func=format_decimal_suffix),
                 join_nonempty(
                     self._format_out('UNSUPPORTED', 'light red') if f.get('ext') in ('f4f', 'f4m') else None,
+                    self._format_out('DRM', 'light red') if f.get('has_drm') else None,
                     format_field(f, 'language', '[%s]'),
                     join_nonempty(format_field(f, 'format_note'),
                                   format_field(f, 'container', ignore=(None, f.get('ext'))),
@@ -3744,9 +3800,14 @@ def print_debug_header(self):

         def get_encoding(stream):
             ret = str(getattr(stream, 'encoding', 'missing (%s)' % type(stream).__name__))
+            additional_info = []
+            if os.environ.get('TERM', '').lower() == 'dumb':
+                additional_info.append('dumb')
             if not supports_terminal_sequences(stream):
                 from .utils import WINDOWS_VT_MODE  # Must be imported locally
-                ret += ' (No VT)' if WINDOWS_VT_MODE is False else ' (No ANSI)'
+                additional_info.append('No VT' if WINDOWS_VT_MODE is False else 'No ANSI')
+            if additional_info:
+                ret = f'{ret} ({",".join(additional_info)})'
             return ret

         encoding_str = 'Encodings: locale %s, fs %s, pref %s, %s' % (
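A runnable, standalone re-creation (names simplified) of the annotation logic: terminal hints are collected in a list and appended once, e.g. "utf-8 (dumb,No ANSI)".

import os

def annotate(ret, supports_sequences):
    additional_info = []
    if os.environ.get('TERM', '').lower() == 'dumb':
        additional_info.append('dumb')
    if not supports_sequences:
        additional_info.append('No ANSI')
    if additional_info:
        ret = f'{ret} ({",".join(additional_info)})'
    return ret

print(annotate('utf-8', supports_sequences=False))  # utf-8 (No ANSI), plus 'dumb' if TERM=dumb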
@@ -3769,12 +3830,13 @@ def get_encoding(stream):
         source = detect_variant()
         if VARIANT not in (None, 'pip'):
             source += '*'
+        klass = type(self)
         write_debug(join_nonempty(
             f'{"yt-dlp" if REPOSITORY == "yt-dlp/yt-dlp" else REPOSITORY} version',
-            __version__,
-            f'[{RELEASE_GIT_HEAD}]' if RELEASE_GIT_HEAD else '',
+            f'{CHANNEL}@{__version__}',
+            f'[{RELEASE_GIT_HEAD[:9]}]' if RELEASE_GIT_HEAD else '',
             '' if source == 'unknown' else f'({source})',
-            '' if _IN_CLI else 'API',
+            '' if _IN_CLI else 'API' if klass == YoutubeDL else f'API:{self.__module__}.{klass.__qualname__}',
             delim=' '))

         if not _IN_CLI:
@@ -3970,7 +4032,7 @@ def _write_subtitles(self, info_dict, filename):
             # that way it will silently go on when used with unsupporting IE
             return ret
         elif not subtitles:
-            self.to_screen('[info] There\'s no subtitles for the requested languages')
+            self.to_screen('[info] There are no subtitles for the requested languages')
             return ret
         sub_filename_base = self.prepare_filename(info_dict, 'subtitle')
         if not sub_filename_base:
@@ -4024,7 +4086,7 @@ def _write_thumbnails(self, label, info_dict, filename, thumb_filename_base=None
         if write_all or self.params.get('writethumbnail', False):
             thumbnails = info_dict.get('thumbnails') or []
             if not thumbnails:
-                self.to_screen(f'[info] There\'s no {label} thumbnails to download')
+                self.to_screen(f'[info] There are no {label} thumbnails to download')
                 return ret
             multiple = write_all and len(thumbnails) > 1
@@ -4056,8 +4118,11 @@ def _write_thumbnails(self, label, info_dict, filename, thumb_filename_base=None
                     ret.append((thumb_filename, thumb_filename_final))
                     t['filepath'] = thumb_filename
                 except network_exceptions as err:
+                    if isinstance(err, urllib.error.HTTPError) and err.code == 404:
+                        self.to_screen(f'[info] {thumb_display_id.title()} does not exist')
+                    else:
+                        self.report_warning(f'Unable to download {thumb_display_id}: {err}')
                     thumbnails.pop(idx)
-                    self.report_warning(f'Unable to download {thumb_display_id}: {err}')
             if ret and not write_all:
                 break
         return ret
yt_dlp/__init__.py
@@ -13,6 +13,7 @@
 import os
 import re
 import sys
+import traceback

 from .compat import compat_shlex_quote
 from .cookies import SUPPORTED_BROWSERS, SUPPORTED_KEYRINGS
@@ -187,8 +188,8 @@ def validate_minmax(min_val, max_val, min_name, max_name=None):
         raise ValueError(f'{max_name} "{max_val}" must be must be greater than or equal to {min_name} "{min_val}"')

     # Usernames and passwords
-    validate(not opts.usenetrc or (opts.username is None and opts.password is None),
-             '.netrc', msg='using {name} conflicts with giving username/password')
+    validate(sum(map(bool, (opts.usenetrc, opts.netrc_cmd, opts.username))) <= 1, '.netrc',
+             msg='{name}, netrc command and username/password are mutually exclusive options')
     validate(opts.password is None or opts.username is not None, 'account username', msg='{name} missing')
     validate(opts.ap_password is None or opts.ap_username is not None,
              'TV Provider account username', msg='{name} missing')
@@ -318,31 +319,50 @@ def validate_outtmpl(tmpl, msg):
     if outtmpl_default == '':
         opts.skip_download = None
         del opts.outtmpl['default']
-    if outtmpl_default and not os.path.splitext(outtmpl_default)[1] and opts.extractaudio:
-        raise ValueError(
-            'Cannot download a video and extract audio into the same file! '
-            f'Use "{outtmpl_default}.%(ext)s" instead of "{outtmpl_default}" as the output template')

-    def parse_chapters(name, value):
-        chapters, ranges = [], []
+    def parse_chapters(name, value, advanced=False):
         parse_timestamp = lambda x: float('inf') if x in ('inf', 'infinite') else parse_duration(x)
-        for regex in value or []:
-            if regex.startswith('*'):
-                for range_ in map(str.strip, regex[1:].split(',')):
-                    mobj = range_ != '-' and re.fullmatch(r'([^-]+)?\s*-\s*([^-]+)?', range_)
-                    dur = mobj and (parse_timestamp(mobj.group(1) or '0'), parse_timestamp(mobj.group(2) or 'inf'))
-                    if None in (dur or [None]):
-                        raise ValueError(f'invalid {name} time range "{regex}". Must be of the form "*start-end"')
-                    ranges.append(dur)
-                continue
-            try:
-                chapters.append(re.compile(regex))
-            except re.error as err:
-                raise ValueError(f'invalid {name} regex "{regex}" - {err}')
-        return chapters, ranges
+        TIMESTAMP_RE = r'''(?x)(?:
+            (?P<start_sign>-?)(?P<start>[^-]+)
+        )?\s*-\s*(?:
+            (?P<end_sign>-?)(?P<end>[^-]+)
+        )?'''

-    opts.remove_chapters, opts.remove_ranges = parse_chapters('--remove-chapters', opts.remove_chapters)
-    opts.download_ranges = download_range_func(*parse_chapters('--download-sections', opts.download_ranges))
+        chapters, ranges, from_url = [], [], False
+        for regex in value or []:
+            if advanced and regex == '*from-url':
+                from_url = True
+                continue
+            elif not regex.startswith('*'):
+                try:
+                    chapters.append(re.compile(regex))
+                except re.error as err:
+                    raise ValueError(f'invalid {name} regex "{regex}" - {err}')
+                continue
+
+            for range_ in map(str.strip, regex[1:].split(',')):
+                mobj = range_ != '-' and re.fullmatch(TIMESTAMP_RE, range_)
+                dur = mobj and [parse_timestamp(mobj.group('start') or '0'), parse_timestamp(mobj.group('end') or 'inf')]
+                signs = mobj and (mobj.group('start_sign'), mobj.group('end_sign'))
+
+                err = None
+                if None in (dur or [None]):
+                    err = 'Must be of the form "*start-end"'
+                elif not advanced and any(signs):
+                    err = 'Negative timestamps are not allowed'
+                else:
+                    dur[0] *= -1 if signs[0] else 1
+                    dur[1] *= -1 if signs[1] else 1
+                    if dur[1] == float('-inf'):
+                        err = '"-inf" is not a valid end'
+                if err:
+                    raise ValueError(f'invalid {name} time range "{regex}". {err}')
+                ranges.append(dur)
+
+        return chapters, ranges, from_url
+
+    opts.remove_chapters, opts.remove_ranges, _ = parse_chapters('--remove-chapters', opts.remove_chapters)
+    opts.download_ranges = download_range_func(*parse_chapters('--download-sections', opts.download_ranges, True))

     # Cookies from browser
     if opts.cookiesfrombrowser:
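To illustrate what the new TIMESTAMP_RE accepts, here is a standalone re-creation (the regex is copied from the hunk; the sample range is invented) of parsing a negative-offset section such as "*-30 - -10" for --download-sections:

import re

TIMESTAMP_RE = r'''(?x)(?:
    (?P<start_sign>-?)(?P<start>[^-]+)
)?\s*-\s*(?:
    (?P<end_sign>-?)(?P<end>[^-]+)
)?'''

mobj = re.fullmatch(TIMESTAMP_RE, '-30 - -10')
print(mobj.group('start_sign'), mobj.group('start'))  # '-' '30 '
print(mobj.group('end_sign'), mobj.group('end'))      # '-' '10'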
@@ -400,14 +420,19 @@ def metadataparser_actions(f):
     except Exception as err:
         raise ValueError(f'Invalid playlist-items {opts.playlist_items!r}: {err}')

-    geo_bypass_code = opts.geo_bypass_ip_block or opts.geo_bypass_country
-    if geo_bypass_code is not None:
+    opts.geo_bypass_country, opts.geo_bypass_ip_block = None, None
+    if opts.geo_bypass.lower() not in ('default', 'never'):
         try:
-            GeoUtils.random_ipv4(geo_bypass_code)
+            GeoUtils.random_ipv4(opts.geo_bypass)
         except Exception:
-            raise ValueError('unsupported geo-bypass country or ip-block')
+            raise ValueError(f'Unsupported --xff "{opts.geo_bypass}"')
+        if len(opts.geo_bypass) == 2:
+            opts.geo_bypass_country = opts.geo_bypass
+        else:
+            opts.geo_bypass_ip_block = opts.geo_bypass
+    opts.geo_bypass = opts.geo_bypass.lower() != 'never'

-    opts.match_filter = match_filter_func(opts.match_filter)
+    opts.match_filter = match_filter_func(opts.match_filter, opts.breaking_match_filter)

     if opts.download_archive is not None:
         opts.download_archive = expand_path(opts.download_archive)
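A standalone sketch (simplified; not the option-parsing code itself) of how the new --xff value is classified: a two-character value is treated as a country code, anything else as an IP block.

def classify_xff(value):
    if value.lower() in ('default', 'never'):
        return None
    return 'country' if len(value) == 2 else 'ip_block'

print(classify_xff('US'))             # country
print(classify_xff('192.0.2.0/24'))   # ip_block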
@@ -434,6 +459,10 @@ def metadataparser_actions(f):
     elif ed and proto == 'default':
         default_downloader = ed.get_basename()

+    for policy in opts.color.values():
+        if policy not in ('always', 'auto', 'no_color', 'never'):
+            raise ValueError(f'"{policy}" is not a valid color policy')
+
     warnings, deprecation_warnings = [], []

     # Common mistake: -f best
@@ -708,6 +737,8 @@ def parse_options(argv=None):
         'dumpjson', 'dump_single_json', 'getdescription', 'getduration', 'getfilename',
         'getformat', 'getid', 'getthumbnail', 'gettitle', 'geturl'
     ))
+    if opts.quiet is None:
+        opts.quiet = any_getting or opts.print_json or bool(opts.forceprint)

     playlist_pps = [pp for pp in postprocessors if pp.get('when') == 'playlist']
     write_playlist_infojson = (opts.writeinfojson and not opts.clean_infojson
|
|||||||
return ParsedOptions(parser, opts, urls, {
|
return ParsedOptions(parser, opts, urls, {
|
||||||
'usenetrc': opts.usenetrc,
|
'usenetrc': opts.usenetrc,
|
||||||
'netrc_location': opts.netrc_location,
|
'netrc_location': opts.netrc_location,
|
||||||
|
'netrc_cmd': opts.netrc_cmd,
|
||||||
'username': opts.username,
|
'username': opts.username,
|
||||||
'password': opts.password,
|
'password': opts.password,
|
||||||
'twofactor': opts.twofactor,
|
'twofactor': opts.twofactor,
|
||||||
@ -743,7 +775,7 @@ def parse_options(argv=None):
|
|||||||
'client_certificate': opts.client_certificate,
|
'client_certificate': opts.client_certificate,
|
||||||
'client_certificate_key': opts.client_certificate_key,
|
'client_certificate_key': opts.client_certificate_key,
|
||||||
'client_certificate_password': opts.client_certificate_password,
|
'client_certificate_password': opts.client_certificate_password,
|
||||||
'quiet': opts.quiet or any_getting or opts.print_json or bool(opts.forceprint),
|
'quiet': opts.quiet,
|
||||||
'no_warnings': opts.no_warnings,
|
'no_warnings': opts.no_warnings,
|
||||||
'forceurl': opts.geturl,
|
'forceurl': opts.geturl,
|
||||||
'forcetitle': opts.gettitle,
|
'forcetitle': opts.gettitle,
|
||||||
@ -890,7 +922,7 @@ def parse_options(argv=None):
|
|||||||
'playlist_items': opts.playlist_items,
|
'playlist_items': opts.playlist_items,
|
||||||
'xattr_set_filesize': opts.xattr_set_filesize,
|
'xattr_set_filesize': opts.xattr_set_filesize,
|
||||||
'match_filter': opts.match_filter,
|
'match_filter': opts.match_filter,
|
||||||
'no_color': opts.no_color,
|
'color': opts.color,
|
||||||
'ffmpeg_location': opts.ffmpeg_location,
|
'ffmpeg_location': opts.ffmpeg_location,
|
||||||
'hls_prefer_native': opts.hls_prefer_native,
|
'hls_prefer_native': opts.hls_prefer_native,
|
||||||
'hls_use_mpegts': opts.hls_use_mpegts,
|
'hls_use_mpegts': opts.hls_use_mpegts,
|
||||||
@@ -934,14 +966,18 @@ def _real_main(argv=None):
         if opts.rm_cachedir:
             ydl.cache.remove()

-        updater = Updater(ydl)
-        if opts.update_self and updater.update() and actual_use:
-            if updater.cmd:
-                return updater.restart()
-            # This code is reachable only for zip variant in py < 3.10
-            # It makes sense to exit here, but the old behavior is to continue
-            ydl.report_warning('Restart yt-dlp to use the updated version')
-            # return 100, 'ERROR: The program must exit for the update to complete'
+        try:
+            updater = Updater(ydl, opts.update_self)
+            if opts.update_self and updater.update() and actual_use:
+                if updater.cmd:
+                    return updater.restart()
+                # This code is reachable only for zip variant in py < 3.10
+                # It makes sense to exit here, but the old behavior is to continue
+                ydl.report_warning('Restart yt-dlp to use the updated version')
+                # return 100, 'ERROR: The program must exit for the update to complete'
+        except Exception:
+            traceback.print_exc()
+            ydl._download_retcode = 100

         if not actual_use:
            if pre_process:
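A standalone sketch of the failure containment this introduces: any updater error is reported and mapped to return code 100 instead of aborting startup. (run_update and its lambdas are invented for illustration.)

import traceback

def run_update(update):
    retcode = 0
    try:
        update()
    except Exception:
        traceback.print_exc()
        retcode = 100
    return retcode

print(run_update(lambda: None))                              # 0
print(run_update(lambda: (_ for _ in ()).throw(OSError())))  # 100, after the traceback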
@@ -955,6 +991,8 @@ def _real_main(argv=None):
         parser.destroy()
         try:
             if opts.load_info_filename is not None:
+                if all_urls:
+                    ydl.report_warning('URLs are ignored due to --load-info-json')
                 return ydl.download_with_info_file(expand_path(opts.load_info_filename))
             else:
                 return ydl.download(all_urls)
yt_dlp/__pyinstaller/__init__.py (new file, +5)
@@ -0,0 +1,5 @@
+import os
+
+
+def get_hook_dirs():
+    return [os.path.dirname(__file__)]
yt_dlp/__pyinstaller/hook-yt_dlp.py (new file, +31)
@@ -0,0 +1,31 @@
+import sys
+
+from PyInstaller.utils.hooks import collect_submodules
+
+
+def pycryptodome_module():
+    try:
+        import Cryptodome  # noqa: F401
+    except ImportError:
+        try:
+            import Crypto  # noqa: F401
+            print('WARNING: Using Crypto since Cryptodome is not available. '
+                  'Install with: pip install pycryptodomex', file=sys.stderr)
+            return 'Crypto'
+        except ImportError:
+            pass
+    return 'Cryptodome'
+
+
+def get_hidden_imports():
+    yield 'yt_dlp.compat._legacy'
+    yield pycryptodome_module()
+    yield from collect_submodules('websockets')
+    # These are auto-detected, but explicitly add them just in case
+    yield from ('mutagen', 'brotli', 'certifi')
+
+
+hiddenimports = list(get_hidden_imports())
+print(f'Adding imports: {hiddenimports}')
+
+excludedimports = ['youtube_dl', 'youtube_dlc', 'test', 'ytdlp_plugins', 'devscripts']
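A simplified standalone variant of the hook's selection logic, for quick local checking. Note it differs from the hook itself: it returns None when neither package is importable, whereas the hook defaults to 'Cryptodome'.

def which_crypto():
    try:
        import Cryptodome  # noqa: F401
        return 'Cryptodome'
    except ImportError:
        try:
            import Crypto  # noqa: F401
            return 'Crypto'
        except ImportError:
            return None

print(which_crypto())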
yt_dlp/aes.py
@@ -2,17 +2,17 @@
 from math import ceil

 from .compat import compat_ord
-from .dependencies import Cryptodome_AES
+from .dependencies import Cryptodome
 from .utils import bytes_to_intlist, intlist_to_bytes

-if Cryptodome_AES:
+if Cryptodome.AES:
     def aes_cbc_decrypt_bytes(data, key, iv):
         """ Decrypt bytes with AES-CBC using pycryptodome """
-        return Cryptodome_AES.new(key, Cryptodome_AES.MODE_CBC, iv).decrypt(data)
+        return Cryptodome.AES.new(key, Cryptodome.AES.MODE_CBC, iv).decrypt(data)

     def aes_gcm_decrypt_and_verify_bytes(data, key, tag, nonce):
         """ Decrypt bytes with AES-GCM using pycryptodome """
-        return Cryptodome_AES.new(key, Cryptodome_AES.MODE_GCM, nonce).decrypt_and_verify(data, tag)
+        return Cryptodome.AES.new(key, Cryptodome.AES.MODE_GCM, nonce).decrypt_and_verify(data, tag)

 else:
     def aes_cbc_decrypt_bytes(data, key, iv):
yt_dlp/cache.py
@@ -1,5 +1,4 @@
 import contextlib
-import errno
 import json
 import os
 import re
@@ -39,11 +38,7 @@ def store(self, section, key, data, dtype='json'):

         fn = self._get_cache_fn(section, key, dtype)
         try:
-            try:
-                os.makedirs(os.path.dirname(fn))
-            except OSError as ose:
-                if ose.errno != errno.EEXIST:
-                    raise
+            os.makedirs(os.path.dirname(fn), exist_ok=True)
             self._ydl.write_debug(f'Saving {section}.{key} to cache')
             write_json_file({'yt-dlp_version': __version__, 'data': data}, fn)
         except Exception:
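A minimal demonstration of the simplification: os.makedirs(..., exist_ok=True) replaces the try/except-EEXIST dance and is idempotent. (The demo path is invented.)

import os
import tempfile

path = os.path.join(tempfile.gettempdir(), 'yt-dlp-cache-demo', 'sub')
os.makedirs(path, exist_ok=True)  # no error if it already exists
os.makedirs(path, exist_ok=True)
print(os.path.isdir(path))  # True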
yt_dlp/casefold.py (new file, +5)
@@ -0,0 +1,5 @@
+import warnings
+
+warnings.warn(DeprecationWarning(f'{__name__} is deprecated'))
+
+casefold = str.casefold
yt_dlp/compat/__init__.py
@@ -8,7 +8,7 @@

 # XXX: Implement this the same way as other DeprecationWarnings without circular import
 passthrough_module(__name__, '._legacy', callback=lambda attr: warnings.warn(
-    DeprecationWarning(f'{__name__}.{attr} is deprecated'), stacklevel=3))
+    DeprecationWarning(f'{__name__}.{attr} is deprecated'), stacklevel=5))


 # HTMLParseError has been deprecated in Python 3.3 and removed in
@@ -70,9 +70,3 @@ def compat_expanduser(path):
         return userhome + path[i:]
 else:
     compat_expanduser = os.path.expanduser
-
-
-# NB: Add modules that are imported dynamically here so that PyInstaller can find them
-# See https://github.com/pyinstaller/pyinstaller-hooks-contrib/issues/438
-if False:
-    from . import _legacy  # noqa: F401
yt_dlp/compat/_legacy.py
@@ -1,5 +1,6 @@
 """ Do not use! """

+import base64
 import collections
 import ctypes
 import getpass
@@ -29,10 +30,11 @@
 from re import Pattern as compat_Pattern  # noqa: F401
 from re import match as compat_Match  # noqa: F401

+from . import compat_expanduser, compat_HTMLParseError, compat_realpath
 from .compat_utils import passthrough_module
-from ..dependencies import Cryptodome_AES as compat_pycrypto_AES  # noqa: F401
 from ..dependencies import brotli as compat_brotli  # noqa: F401
 from ..dependencies import websockets as compat_websockets  # noqa: F401
+from ..dependencies.Cryptodome import AES as compat_pycrypto_AES  # noqa: F401

 passthrough_module(__name__, '...utils', ('WINDOWS_VT_MODE', 'windows_enable_vt_mode'))
@@ -47,23 +49,25 @@ def compat_setenv(key, value, env=os.environ):
     env[key] = value


+compat_base64_b64decode = base64.b64decode
 compat_basestring = str
 compat_casefold = str.casefold
 compat_chr = chr
 compat_collections_abc = collections.abc
-compat_cookiejar = http.cookiejar
-compat_cookiejar_Cookie = http.cookiejar.Cookie
-compat_cookies = http.cookies
-compat_cookies_SimpleCookie = http.cookies.SimpleCookie
-compat_etree_Element = etree.Element
-compat_etree_register_namespace = etree.register_namespace
+compat_cookiejar = compat_http_cookiejar = http.cookiejar
+compat_cookiejar_Cookie = compat_http_cookiejar_Cookie = http.cookiejar.Cookie
+compat_cookies = compat_http_cookies = http.cookies
+compat_cookies_SimpleCookie = compat_http_cookies_SimpleCookie = http.cookies.SimpleCookie
+compat_etree_Element = compat_xml_etree_ElementTree_Element = etree.Element
+compat_etree_register_namespace = compat_xml_etree_register_namespace = etree.register_namespace
 compat_filter = filter
 compat_get_terminal_size = shutil.get_terminal_size
 compat_getenv = os.getenv
-compat_getpass = getpass.getpass
+compat_getpass = compat_getpass_getpass = getpass.getpass
 compat_html_entities = html.entities
 compat_html_entities_html5 = html.entities.html5
-compat_HTMLParser = html.parser.HTMLParser
+compat_html_parser_HTMLParseError = compat_HTMLParseError
+compat_HTMLParser = compat_html_parser_HTMLParser = html.parser.HTMLParser
 compat_http_client = http.client
 compat_http_server = http.server
 compat_input = input
@@ -72,6 +76,8 @@ def compat_setenv(key, value, env=os.environ):
 compat_kwargs = lambda kwargs: kwargs
 compat_map = map
 compat_numeric_types = (int, float, complex)
+compat_os_path_expanduser = compat_expanduser
+compat_os_path_realpath = compat_realpath
 compat_print = print
 compat_shlex_split = shlex.split
 compat_socket_create_connection = socket.create_connection
@@ -81,7 +87,9 @@ def compat_setenv(key, value, env=os.environ):
 compat_subprocess_get_DEVNULL = lambda: DEVNULL
 compat_tokenize_tokenize = tokenize.tokenize
 compat_urllib_error = urllib.error
+compat_urllib_HTTPError = urllib.error.HTTPError
 compat_urllib_parse = urllib.parse
+compat_urllib_parse_parse_qs = urllib.parse.parse_qs
 compat_urllib_parse_quote = urllib.parse.quote
 compat_urllib_parse_quote_plus = urllib.parse.quote_plus
 compat_urllib_parse_unquote_plus = urllib.parse.unquote_plus
@@ -90,8 +98,10 @@ def compat_setenv(key, value, env=os.environ):
 compat_urllib_request = urllib.request
 compat_urllib_request_DataHandler = urllib.request.DataHandler
 compat_urllib_response = urllib.response
-compat_urlretrieve = urllib.request.urlretrieve
-compat_xml_parse_error = etree.ParseError
+compat_urlretrieve = compat_urllib_request_urlretrieve = urllib.request.urlretrieve
+compat_xml_parse_error = compat_xml_etree_ElementTree_ParseError = etree.ParseError
 compat_xpath = lambda xpath: xpath
 compat_zip = zip
 workaround_optparse_bug9161 = lambda: None
+
+legacy = []
yt_dlp/compat/compat_utils.py
@@ -1,5 +1,6 @@
 import collections
 import contextlib
+import functools
 import importlib
 import sys
 import types
@@ -10,61 +11,73 @@


 def get_package_info(module):
-    parent = module.__name__.split('.')[0]
-    parent_module = None
-    with contextlib.suppress(ImportError):
-        parent_module = importlib.import_module(parent)
-
-    for attr in ('__version__', 'version_string', 'version'):
-        version = getattr(parent_module, attr, None)
-        if version is not None:
-            break
-    return _Package(getattr(module, '_yt_dlp__identifier', parent), str(version))
+    return _Package(
+        name=getattr(module, '_yt_dlp__identifier', module.__name__),
+        version=str(next(filter(None, (
+            getattr(module, attr, None)
+            for attr in ('__version__', 'version_string', 'version')
+        )), None)))


 def _is_package(module):
-    try:
-        module.__getattribute__('__path__')
-    except AttributeError:
-        return False
-    return True
+    return '__path__' in vars(module)


-def passthrough_module(parent, child, allowed_attributes=None, *, callback=lambda _: None):
-    parent_module = importlib.import_module(parent)
-    child_module = None  # Import child module only as needed
+def _is_dunder(name):
+    return name.startswith('__') and name.endswith('__')

-    class PassthroughModule(types.ModuleType):
-        def __getattr__(self, attr):
-            if _is_package(parent_module):
-                with contextlib.suppress(ImportError):
-                    return importlib.import_module(f'.{attr}', parent)

-            ret = self.__from_child(attr)
-            if ret is _NO_ATTRIBUTE:
-                raise AttributeError(f'module {parent} has no attribute {attr}')
-            callback(attr)
-            return ret
+class EnhancedModule(types.ModuleType):
+    def __bool__(self):
+        return vars(self).get('__bool__', lambda: True)()

-        def __from_child(self, attr):
-            if allowed_attributes is None:
-                if attr.startswith('__') and attr.endswith('__'):
-                    return _NO_ATTRIBUTE
-            elif attr not in allowed_attributes:
+    def __getattribute__(self, attr):
+        try:
+            ret = super().__getattribute__(attr)
+        except AttributeError:
+            if _is_dunder(attr):
+                raise
+            getter = getattr(self, '__getattr__', None)
+            if not getter:
+                raise
+            ret = getter(attr)
+        return ret.fget() if isinstance(ret, property) else ret
+
+
+def passthrough_module(parent, child, allowed_attributes=(..., ), *, callback=lambda _: None):
+    """Passthrough parent module into a child module, creating the parent if necessary"""
+    def __getattr__(attr):
+        if _is_package(parent):
+            with contextlib.suppress(ModuleNotFoundError):
+                return importlib.import_module(f'.{attr}', parent.__name__)
+
+        ret = from_child(attr)
+        if ret is _NO_ATTRIBUTE:
+            raise AttributeError(f'module {parent.__name__} has no attribute {attr}')
+        callback(attr)
+        return ret
+
+    @functools.lru_cache(maxsize=None)
+    def from_child(attr):
+        nonlocal child
+        if attr not in allowed_attributes:
+            if ... not in allowed_attributes or _is_dunder(attr):
                 return _NO_ATTRIBUTE

-            nonlocal child_module
-            child_module = child_module or importlib.import_module(child, parent)
+        if isinstance(child, str):
+            child = importlib.import_module(child, parent.__name__)

-            with contextlib.suppress(AttributeError):
-                return getattr(child_module, attr)
+        if _is_package(child):
+            with contextlib.suppress(ImportError):
+                return passthrough_module(f'{parent.__name__}.{attr}',
+                                          importlib.import_module(f'.{attr}', child.__name__))

-            if _is_package(child_module):
-                with contextlib.suppress(ImportError):
-                    return importlib.import_module(f'.{attr}', child)
+        with contextlib.suppress(AttributeError):
+            return getattr(child, attr)

         return _NO_ATTRIBUTE

-    # Python 3.6 does not have module level __getattr__
-    # https://peps.python.org/pep-0562/
-    sys.modules[parent].__class__ = PassthroughModule
+    parent = sys.modules.get(parent, types.ModuleType(parent))
+    parent.__class__ = EnhancedModule
+    parent.__getattr__ = __getattr__
+    return parent
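To make the passthrough idea concrete, a standalone toy (module name invented; not yt-dlp's API) using the same PEP 562 module-level __getattr__ mechanism that passthrough_module installs: attributes missing from one module are transparently resolved from another.

import sys
import types
import urllib.parse

mod = types.ModuleType('demo_compat')
mod.__getattr__ = lambda attr: getattr(urllib.parse, attr)  # PEP 562 fallback hook
sys.modules['demo_compat'] = mod

import demo_compat
print(demo_compat.quote('a b'))  # a%20b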
yt_dlp/compat/urllib/__init__.py (new file, +7)
@@ -0,0 +1,7 @@
+# flake8: noqa: F405
+from urllib import *  # noqa: F403
+
+from ..compat_utils import passthrough_module
+
+passthrough_module(__name__, 'urllib')
+del passthrough_module
yt_dlp/compat/urllib/request.py (new file, +40)
@@ -0,0 +1,40 @@
+# flake8: noqa: F405
+from urllib.request import *  # noqa: F403
+
+from ..compat_utils import passthrough_module
+
+passthrough_module(__name__, 'urllib.request')
+del passthrough_module
+
+
+from .. import compat_os_name
+
+if compat_os_name == 'nt':
+    # On older python versions, proxies are extracted from Windows registry erroneously. [1]
+    # If the https proxy in the registry does not have a scheme, urllib will incorrectly add https:// to it. [2]
+    # It is unlikely that the user has actually set it to be https, so we should be fine to safely downgrade
+    # it to http on these older python versions to avoid issues
+    # This also applies for ftp proxy type, as ftp:// proxy scheme is not supported.
+    # 1: https://github.com/python/cpython/issues/86793
+    # 2: https://github.com/python/cpython/blob/51f1ae5ceb0673316c4e4b0175384e892e33cc6e/Lib/urllib/request.py#L2683-L2698
+    import sys
+    from urllib.request import getproxies_environment, getproxies_registry
+
+    def getproxies_registry_patched():
+        proxies = getproxies_registry()
+        if (
+            sys.version_info >= (3, 10, 5)  # https://docs.python.org/3.10/whatsnew/changelog.html#python-3-10-5-final
+            or (3, 9, 13) <= sys.version_info < (3, 10)  # https://docs.python.org/3.9/whatsnew/changelog.html#python-3-9-13-final
+        ):
+            return proxies
+
+        for scheme in ('https', 'ftp'):
+            if scheme in proxies and proxies[scheme].startswith(f'{scheme}://'):
+                proxies[scheme] = 'http' + proxies[scheme][len(scheme):]
+
+        return proxies
+
+    def getproxies():
+        return getproxies_environment() or getproxies_registry_patched()
+
+del compat_os_name
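A quick standalone check (hosts invented) of the scheme downgrade this file applies: an 'https://host' or 'ftp://host' registry proxy entry becomes 'http://host', since those proxy schemes are not usable here.

proxies = {'https': 'https://proxy.example:8080', 'ftp': 'ftp://proxy.example:2121'}
for scheme in ('https', 'ftp'):
    if scheme in proxies and proxies[scheme].startswith(f'{scheme}://'):
        proxies[scheme] = 'http' + proxies[scheme][len(scheme):]
print(proxies)  # both entries now start with http://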
yt_dlp/cookies.py
@@ -1,7 +1,9 @@
 import base64
+import collections
 import contextlib
 import http.cookiejar
 import http.cookies
+import io
 import json
 import os
 import re
@@ -11,6 +13,7 @@
 import sys
 import tempfile
 import time
+import urllib.request
 from datetime import datetime, timedelta, timezone
 from enum import Enum, auto
 from hashlib import pbkdf2_hmac
@@ -20,6 +23,7 @@
     aes_gcm_decrypt_and_verify_bytes,
     unpad_pkcs7,
 )
+from .compat import functools
 from .dependencies import (
     _SECRETSTORAGE_UNAVAILABLE_REASON,
     secretstorage,
@@ -28,11 +32,14 @@
 from .minicurses import MultilinePrinter, QuietMultilinePrinter
 from .utils import (
     Popen,
-    YoutubeDLCookieJar,
     error_to_str,
+    escape_url,
     expand_path,
     is_path_like,
+    sanitize_url,
+    str_or_none,
     try_call,
+    write_string,
 )

 CHROMIUM_BASED_BROWSERS = {'brave', 'chrome', 'chromium', 'edge', 'opera', 'vivaldi'}
@@ -346,7 +353,9 @@ class ChromeCookieDecryptor:
     Linux:
     - cookies are either v10 or v11
     - v10: AES-CBC encrypted with a fixed key
+        - also attempts empty password if decryption fails
     - v11: AES-CBC encrypted with an OS protected key (keyring)
+        - also attempts empty password if decryption fails
     - v11 keys can be stored in various places depending on the activate desktop environment [2]

     Mac:
@@ -361,7 +370,7 @@ class ChromeCookieDecryptor:

     Sources:
     - [1] https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/
-    - [2] https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/key_storage_linux.cc
+    - [2] https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/sync/key_storage_linux.cc
         - KeyStorageLinux::CreateService
     """

@@ -383,32 +392,49 @@ class LinuxChromeCookieDecryptor(ChromeCookieDecryptor):
     def __init__(self, browser_keyring_name, logger, *, keyring=None):
         self._logger = logger
         self._v10_key = self.derive_key(b'peanuts')
-        password = _get_linux_keyring_password(browser_keyring_name, keyring, logger)
-        self._v11_key = None if password is None else self.derive_key(password)
+        self._empty_key = self.derive_key(b'')
         self._cookie_counts = {'v10': 0, 'v11': 0, 'other': 0}
+        self._browser_keyring_name = browser_keyring_name
+        self._keyring = keyring
+
+    @functools.cached_property
+    def _v11_key(self):
+        password = _get_linux_keyring_password(self._browser_keyring_name, self._keyring, self._logger)
+        return None if password is None else self.derive_key(password)

     @staticmethod
     def derive_key(password):
         # values from
-        # https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/os_crypt_linux.cc
+        # https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/sync/os_crypt_linux.cc
         return pbkdf2_sha1(password, salt=b'saltysalt', iterations=1, key_length=16)

     def decrypt(self, encrypted_value):
+        """
+
+        following the same approach as the fix in [1]: if cookies fail to decrypt then attempt to decrypt
+        with an empty password. The failure detection is not the same as what chromium uses so the
+        results won't be perfect
+
+        References:
+            - [1] https://chromium.googlesource.com/chromium/src/+/bbd54702284caca1f92d656fdcadf2ccca6f4165%5E%21/
+                - a bugfix to try an empty password as a fallback
+        """
         version = encrypted_value[:3]
         ciphertext = encrypted_value[3:]

         if version == b'v10':
             self._cookie_counts['v10'] += 1
-            return _decrypt_aes_cbc(ciphertext, self._v10_key, self._logger)
+            return _decrypt_aes_cbc_multi(ciphertext, (self._v10_key, self._empty_key), self._logger)

         elif version == b'v11':
             self._cookie_counts['v11'] += 1
             if self._v11_key is None:
                 self._logger.warning('cannot decrypt v11 cookies: no key found', only_once=True)
                 return None
-            return _decrypt_aes_cbc(ciphertext, self._v11_key, self._logger)
+            return _decrypt_aes_cbc_multi(ciphertext, (self._v11_key, self._empty_key), self._logger)

         else:
+            self._logger.warning(f'unknown cookie version: "{version}"', only_once=True)
             self._cookie_counts['other'] += 1
             return None
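A sketch of the multi-key idea behind _decrypt_aes_cbc_multi: each candidate key is tried in turn until one yields a usable plaintext. decrypt_one here is an invented stand-in; the real helper does AES-CBC decryption plus PKCS#7 unpadding.

def decrypt_multi(ciphertext, keys, decrypt_one):
    for key in keys:
        try:
            return decrypt_one(ciphertext, key).decode()
        except (ValueError, UnicodeDecodeError):
            continue  # wrong key; fall through to the next candidate
    return None

print(decrypt_multi(b'hi', [None], lambda data, key: data))  # hi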
@@ -423,7 +449,7 @@ def __init__(self, browser_keyring_name, logger):
     @staticmethod
     def derive_key(password):
         # values from
-        # https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/os_crypt_mac.mm
+        # https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/sync/os_crypt_mac.mm
         return pbkdf2_sha1(password, salt=b'saltysalt', iterations=1003, key_length=16)

     def decrypt(self, encrypted_value):
@@ -436,12 +462,12 @@ def decrypt(self, encrypted_value):
                 self._logger.warning('cannot decrypt v10 cookies: no key found', only_once=True)
                 return None

-            return _decrypt_aes_cbc(ciphertext, self._v10_key, self._logger)
+            return _decrypt_aes_cbc_multi(ciphertext, (self._v10_key,), self._logger)

         else:
             self._cookie_counts['other'] += 1
             # other prefixes are considered 'old data' which were stored as plaintext
-            # https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/os_crypt_mac.mm
+            # https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/sync/os_crypt_mac.mm
             return encrypted_value


@@ -461,7 +487,7 @@ def decrypt(self, encrypted_value):
                 self._logger.warning('cannot decrypt v10 cookies: no key found', only_once=True)
                 return None

-            # https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/os_crypt_win.cc
+            # https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/sync/os_crypt_win.cc
             #   kNonceLength
             nonce_length = 96 // 8
             # boringssl
@@ -478,23 +504,27 @@ def decrypt(self, encrypted_value):
         else:
             self._cookie_counts['other'] += 1
             # any other prefix means the data is DPAPI encrypted
-            # https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/os_crypt_win.cc
+            # https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/sync/os_crypt_win.cc
             return _decrypt_windows_dpapi(encrypted_value, self._logger).decode()


 def _extract_safari_cookies(profile, logger):
-    if profile is not None:
-        logger.error('safari does not support profiles')
     if sys.platform != 'darwin':
         raise ValueError(f'unsupported platform: {sys.platform}')

-    cookies_path = os.path.expanduser('~/Library/Cookies/Cookies.binarycookies')
-    if not os.path.isfile(cookies_path):
-        logger.debug('Trying secondary cookie location')
-        cookies_path = os.path.expanduser('~/Library/Containers/com.apple.Safari/Data/Library/Cookies/Cookies.binarycookies')
+    if profile:
+        cookies_path = os.path.expanduser(profile)
         if not os.path.isfile(cookies_path):
-            raise FileNotFoundError('could not find safari cookies database')
+            raise FileNotFoundError('custom safari cookies database not found')
+
+    else:
+        cookies_path = os.path.expanduser('~/Library/Cookies/Cookies.binarycookies')
+
+        if not os.path.isfile(cookies_path):
+            logger.debug('Trying secondary cookie location')
+            cookies_path = os.path.expanduser('~/Library/Containers/com.apple.Safari/Data/Library/Cookies/Cookies.binarycookies')
+            if not os.path.isfile(cookies_path):
+                raise FileNotFoundError('could not find safari cookies database')

     with open(cookies_path, 'rb') as f:
         cookies_data = f.read()
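A toy version (standalone; the sample path is invented) of the new profile handling: a user-supplied Safari profile is now interpreted as a path to the cookie database instead of being rejected.

import os

def safari_cookie_path(profile=None):
    if profile:
        return os.path.expanduser(profile)
    return os.path.expanduser('~/Library/Cookies/Cookies.binarycookies')

print(safari_cookie_path('~/my.binarycookies'))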
@ -657,19 +687,27 @@ class _LinuxDesktopEnvironment(Enum):
|
|||||||
"""
|
"""
|
||||||
OTHER = auto()
|
OTHER = auto()
|
||||||
CINNAMON = auto()
|
CINNAMON = auto()
|
||||||
|
DEEPIN = auto()
|
||||||
GNOME = auto()
|
GNOME = auto()
|
||||||
KDE = auto()
|
KDE3 = auto()
|
||||||
|
KDE4 = auto()
|
||||||
|
KDE5 = auto()
|
||||||
|
KDE6 = auto()
|
||||||
PANTHEON = auto()
|
PANTHEON = auto()
|
||||||
|
UKUI = auto()
|
||||||
UNITY = auto()
|
UNITY = auto()
|
||||||
XFCE = auto()
|
XFCE = auto()
|
||||||
|
LXQT = auto()
|
||||||
|
|
||||||
|
|
||||||
class _LinuxKeyring(Enum):
|
class _LinuxKeyring(Enum):
|
||||||
"""
|
"""
|
||||||
https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/key_storage_util_linux.h
|
https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/sync/key_storage_util_linux.h
|
||||||
SelectedLinuxBackend
|
SelectedLinuxBackend
|
||||||
"""
|
"""
|
||||||
KWALLET = auto()
|
KWALLET = auto() # KDE4
|
||||||
|
KWALLET5 = auto()
|
||||||
|
KWALLET6 = auto()
|
||||||
GNOMEKEYRING = auto()
|
GNOMEKEYRING = auto()
|
||||||
BASICTEXT = auto()
|
BASICTEXT = auto()
|
||||||
|
|
||||||
@ -677,7 +715,7 @@ class _LinuxKeyring(Enum):
|
|||||||
SUPPORTED_KEYRINGS = _LinuxKeyring.__members__.keys()
|
SUPPORTED_KEYRINGS = _LinuxKeyring.__members__.keys()
|
||||||
|
|
||||||
|
|
||||||
def _get_linux_desktop_environment(env):
|
def _get_linux_desktop_environment(env, logger):
|
||||||
"""
|
"""
|
||||||
https://chromium.googlesource.com/chromium/src/+/refs/heads/main/base/nix/xdg_util.cc
|
https://chromium.googlesource.com/chromium/src/+/refs/heads/main/base/nix/xdg_util.cc
|
||||||
GetDesktopEnvironment
|
GetDesktopEnvironment
|
||||||
@ -692,51 +730,97 @@ def _get_linux_desktop_environment(env):
|
|||||||
return _LinuxDesktopEnvironment.GNOME
|
return _LinuxDesktopEnvironment.GNOME
|
||||||
else:
|
else:
|
||||||
return _LinuxDesktopEnvironment.UNITY
|
return _LinuxDesktopEnvironment.UNITY
|
||||||
|
elif xdg_current_desktop == 'Deepin':
|
||||||
|
return _LinuxDesktopEnvironment.DEEPIN
|
||||||
elif xdg_current_desktop == 'GNOME':
|
elif xdg_current_desktop == 'GNOME':
|
||||||
return _LinuxDesktopEnvironment.GNOME
|
return _LinuxDesktopEnvironment.GNOME
|
||||||
elif xdg_current_desktop == 'X-Cinnamon':
|
elif xdg_current_desktop == 'X-Cinnamon':
|
||||||
return _LinuxDesktopEnvironment.CINNAMON
|
return _LinuxDesktopEnvironment.CINNAMON
|
||||||
elif xdg_current_desktop == 'KDE':
|
elif xdg_current_desktop == 'KDE':
|
||||||
return _LinuxDesktopEnvironment.KDE
|
kde_version = env.get('KDE_SESSION_VERSION', None)
|
||||||
|
if kde_version == '5':
|
||||||
|
return _LinuxDesktopEnvironment.KDE5
|
||||||
|
elif kde_version == '6':
|
||||||
|
return _LinuxDesktopEnvironment.KDE6
|
||||||
|
elif kde_version == '4':
|
||||||
|
return _LinuxDesktopEnvironment.KDE4
|
||||||
|
else:
|
||||||
|
logger.info(f'unknown KDE version: "{kde_version}". Assuming KDE4')
|
||||||
|
return _LinuxDesktopEnvironment.KDE4
|
||||||
elif xdg_current_desktop == 'Pantheon':
|
elif xdg_current_desktop == 'Pantheon':
|
||||||
return _LinuxDesktopEnvironment.PANTHEON
|
return _LinuxDesktopEnvironment.PANTHEON
|
||||||
elif xdg_current_desktop == 'XFCE':
|
elif xdg_current_desktop == 'XFCE':
|
||||||
return _LinuxDesktopEnvironment.XFCE
|
return _LinuxDesktopEnvironment.XFCE
|
||||||
|
elif xdg_current_desktop == 'UKUI':
|
||||||
|
return _LinuxDesktopEnvironment.UKUI
|
||||||
|
elif xdg_current_desktop == 'LXQt':
|
||||||
|
return _LinuxDesktopEnvironment.LXQT
|
||||||
|
else:
|
||||||
|
logger.info(f'XDG_CURRENT_DESKTOP is set to an unknown value: "{xdg_current_desktop}"')
|
||||||
|
|
||||||
elif desktop_session is not None:
|
elif desktop_session is not None:
|
||||||
if desktop_session in ('mate', 'gnome'):
|
if desktop_session == 'deepin':
|
||||||
|
return _LinuxDesktopEnvironment.DEEPIN
|
||||||
|
elif desktop_session in ('mate', 'gnome'):
|
||||||
return _LinuxDesktopEnvironment.GNOME
|
return _LinuxDesktopEnvironment.GNOME
|
||||||
elif 'kde' in desktop_session:
|
elif desktop_session in ('kde4', 'kde-plasma'):
|
||||||
return _LinuxDesktopEnvironment.KDE
|
return _LinuxDesktopEnvironment.KDE4
|
||||||
elif 'xfce' in desktop_session:
|
elif desktop_session == 'kde':
|
||||||
|
if 'KDE_SESSION_VERSION' in env:
|
||||||
|
return _LinuxDesktopEnvironment.KDE4
|
||||||
|
else:
|
||||||
|
return _LinuxDesktopEnvironment.KDE3
|
||||||
|
elif 'xfce' in desktop_session or desktop_session == 'xubuntu':
|
||||||
return _LinuxDesktopEnvironment.XFCE
|
return _LinuxDesktopEnvironment.XFCE
|
||||||
|
elif desktop_session == 'ukui':
|
||||||
|
return _LinuxDesktopEnvironment.UKUI
|
||||||
|
else:
|
||||||
|
logger.info(f'DESKTOP_SESSION is set to an unknown value: "{desktop_session}"')
|
||||||
|
|
||||||
else:
|
else:
|
||||||
if 'GNOME_DESKTOP_SESSION_ID' in env:
|
if 'GNOME_DESKTOP_SESSION_ID' in env:
|
||||||
return _LinuxDesktopEnvironment.GNOME
|
return _LinuxDesktopEnvironment.GNOME
|
||||||
elif 'KDE_FULL_SESSION' in env:
|
elif 'KDE_FULL_SESSION' in env:
|
||||||
return _LinuxDesktopEnvironment.KDE
|
if 'KDE_SESSION_VERSION' in env:
|
||||||
|
return _LinuxDesktopEnvironment.KDE4
|
||||||
|
else:
|
||||||
|
return _LinuxDesktopEnvironment.KDE3
|
||||||
return _LinuxDesktopEnvironment.OTHER
|
return _LinuxDesktopEnvironment.OTHER
|
||||||
|
|
||||||
|
|
||||||
def _choose_linux_keyring(logger):
|
def _choose_linux_keyring(logger):
|
||||||
"""
|
"""
|
||||||
https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/key_storage_util_linux.cc
|
SelectBackend in [1]
|
||||||
SelectBackend
|
|
||||||
|
There is currently support for forcing chromium to use BASIC_TEXT by creating a file called
|
||||||
|
`Disable Local Encryption` [1] in the user data dir. The function to write this file (`WriteBackendUse()` [1])
|
||||||
|
does not appear to be called anywhere other than in tests, so the user would have to create this file manually
|
||||||
|
and so would be aware enough to tell yt-dlp to use the BASIC_TEXT keyring.
|
||||||
|
|
||||||
|
References:
|
||||||
|
- [1] https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/sync/key_storage_util_linux.cc
|
||||||
"""
|
"""
|
||||||
desktop_environment = _get_linux_desktop_environment(os.environ)
|
desktop_environment = _get_linux_desktop_environment(os.environ, logger)
|
||||||
logger.debug(f'detected desktop environment: {desktop_environment.name}')
|
logger.debug(f'detected desktop environment: {desktop_environment.name}')
|
||||||
if desktop_environment == _LinuxDesktopEnvironment.KDE:
|
if desktop_environment == _LinuxDesktopEnvironment.KDE4:
|
||||||
linux_keyring = _LinuxKeyring.KWALLET
|
linux_keyring = _LinuxKeyring.KWALLET
|
||||||
elif desktop_environment == _LinuxDesktopEnvironment.OTHER:
|
elif desktop_environment == _LinuxDesktopEnvironment.KDE5:
|
||||||
|
linux_keyring = _LinuxKeyring.KWALLET5
|
||||||
|
elif desktop_environment == _LinuxDesktopEnvironment.KDE6:
|
||||||
|
linux_keyring = _LinuxKeyring.KWALLET6
|
||||||
|
elif desktop_environment in (
|
||||||
|
_LinuxDesktopEnvironment.KDE3, _LinuxDesktopEnvironment.LXQT, _LinuxDesktopEnvironment.OTHER
|
||||||
|
):
|
||||||
linux_keyring = _LinuxKeyring.BASICTEXT
|
linux_keyring = _LinuxKeyring.BASICTEXT
|
||||||
else:
|
else:
|
||||||
linux_keyring = _LinuxKeyring.GNOMEKEYRING
|
linux_keyring = _LinuxKeyring.GNOMEKEYRING
|
||||||
return linux_keyring
|
return linux_keyring
|
||||||
|
|
||||||
|
|
||||||
def _get_kwallet_network_wallet(logger):
|
def _get_kwallet_network_wallet(keyring, logger):
|
||||||
""" The name of the wallet used to store network passwords.
|
""" The name of the wallet used to store network passwords.
|
||||||
|
|
||||||
https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/kwallet_dbus.cc
|
https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/sync/kwallet_dbus.cc
|
||||||
KWalletDBus::NetworkWallet
|
KWalletDBus::NetworkWallet
|
||||||
which does a dbus call to the following function:
|
which does a dbus call to the following function:
|
||||||
https://api.kde.org/frameworks/kwallet/html/classKWallet_1_1Wallet.html
|
https://api.kde.org/frameworks/kwallet/html/classKWallet_1_1Wallet.html
|
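
The KDE branch now discriminates on `KDE_SESSION_VERSION` instead of collapsing everything to a single `KDE` value. A tiny standalone mirror of just that mapping (the function name `kde_flavour` is illustrative, not part of yt-dlp):

```python
def kde_flavour(env):
    """Mirror of the branch above for XDG_CURRENT_DESKTOP == 'KDE'."""
    version = env.get('KDE_SESSION_VERSION')
    # unknown or missing version falls back to KDE4, as the code logs
    return {'4': 'KDE4', '5': 'KDE5', '6': 'KDE6'}.get(version, 'KDE4')

assert kde_flavour({'KDE_SESSION_VERSION': '5'}) == 'KDE5'
assert kde_flavour({}) == 'KDE4'
```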
@@ -744,10 +828,22 @@ def _get_kwallet_network_wallet(logger):
    """
    default_wallet = 'kdewallet'
    try:
+       if keyring == _LinuxKeyring.KWALLET:
+           service_name = 'org.kde.kwalletd'
+           wallet_path = '/modules/kwalletd'
+       elif keyring == _LinuxKeyring.KWALLET5:
+           service_name = 'org.kde.kwalletd5'
+           wallet_path = '/modules/kwalletd5'
+       elif keyring == _LinuxKeyring.KWALLET6:
+           service_name = 'org.kde.kwalletd6'
+           wallet_path = '/modules/kwalletd6'
+       else:
+           raise ValueError(keyring)
+
        stdout, _, returncode = Popen.run([
            'dbus-send', '--session', '--print-reply=literal',
-           '--dest=org.kde.kwalletd5',
-           '/modules/kwalletd5',
+           f'--dest={service_name}',
+           wallet_path,
            'org.kde.KWallet.networkWallet'
        ], text=True, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
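
Each KWallet generation lives behind its own D-Bus service, which is why the hard-coded `kwalletd5` endpoint had to go. The same dispatch reads naturally as a lookup table; a sketch (the dict itself is illustrative, but the service names and object paths are the ones hard-coded above):

```python
# Illustrative only: each KWallet generation's D-Bus service and object path,
# as selected in the hunk above.
KWALLET_DBUS = {
    'KWALLET': ('org.kde.kwalletd', '/modules/kwalletd'),    # KDE4
    'KWALLET5': ('org.kde.kwalletd5', '/modules/kwalletd5'),
    'KWALLET6': ('org.kde.kwalletd6', '/modules/kwalletd6'),
}

service_name, wallet_path = KWALLET_DBUS['KWALLET5']
# equivalent call:
# dbus-send --session --print-reply=literal --dest=org.kde.kwalletd5 \
#     /modules/kwalletd5 org.kde.KWallet.networkWallet
```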
@@ -762,8 +858,8 @@ def _get_kwallet_network_wallet(logger):
        return default_wallet


-def _get_kwallet_password(browser_keyring_name, logger):
-   logger.debug('using kwallet-query to obtain password from kwallet')
+def _get_kwallet_password(browser_keyring_name, keyring, logger):
+   logger.debug(f'using kwallet-query to obtain password from {keyring.name}')

    if shutil.which('kwallet-query') is None:
        logger.error('kwallet-query command not found. KWallet and kwallet-query '
@@ -771,7 +867,7 @@ def _get_kwallet_password(browser_keyring_name, logger):
                     'included in the kwallet package for your distribution')
        return b''

-   network_wallet = _get_kwallet_network_wallet(logger)
+   network_wallet = _get_kwallet_network_wallet(keyring, logger)

    try:
        stdout, _, returncode = Popen.run([
@@ -793,8 +889,9 @@ def _get_kwallet_password(browser_keyring_name, logger):
                # checks hasEntry. To verify this:
                # dbus-monitor "interface='org.kde.KWallet'" "type=method_return"
                # while starting chrome.
-               # this may be a bug as the intended behaviour is to generate a random password and store
-               # it, but that doesn't matter here.
+               # this was identified as a bug later and fixed in
+               # https://chromium.googlesource.com/chromium/src/+/bbd54702284caca1f92d656fdcadf2ccca6f4165%5E%21/#F0
+               # https://chromium.googlesource.com/chromium/src/+/5463af3c39d7f5b6d11db7fbd51e38cc1974d764
                return b''
            else:
                logger.debug('password found')
@@ -832,8 +929,8 @@ def _get_linux_keyring_password(browser_keyring_name, keyring, logger):
    keyring = _LinuxKeyring[keyring] if keyring else _choose_linux_keyring(logger)
    logger.debug(f'Chosen keyring: {keyring.name}')

-   if keyring == _LinuxKeyring.KWALLET:
-       return _get_kwallet_password(browser_keyring_name, logger)
+   if keyring in (_LinuxKeyring.KWALLET, _LinuxKeyring.KWALLET5, _LinuxKeyring.KWALLET6):
+       return _get_kwallet_password(browser_keyring_name, keyring, logger)
    elif keyring == _LinuxKeyring.GNOMEKEYRING:
        return _get_gnome_keyring_password(browser_keyring_name, logger)
    elif keyring == _LinuxKeyring.BASICTEXT:
@@ -861,6 +958,10 @@ def _get_mac_keyring_password(browser_keyring_name, logger):


def _get_windows_v10_key(browser_root, logger):
+   """
+   References:
+       - [1] https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/sync/os_crypt_win.cc
+   """
    path = _find_most_recently_used_file(browser_root, 'Local State', logger)
    if path is None:
        logger.error('could not find local state file')
@@ -869,11 +970,13 @@ def _get_windows_v10_key(browser_root, logger):
    with open(path, encoding='utf8') as f:
        data = json.load(f)
    try:
+       # kOsCryptEncryptedKeyPrefName in [1]
        base64_key = data['os_crypt']['encrypted_key']
    except KeyError:
        logger.error('no encrypted key in Local State')
        return None
    encrypted_key = base64.b64decode(base64_key)
+   # kDPAPIKeyPrefix in [1]
    prefix = b'DPAPI'
    if not encrypted_key.startswith(prefix):
        logger.error('invalid key')
@@ -885,13 +988,15 @@ def pbkdf2_sha1(password, salt, iterations, key_length):
    return pbkdf2_hmac('sha1', password, salt, iterations, key_length)


-def _decrypt_aes_cbc(ciphertext, key, logger, initialization_vector=b' ' * 16):
-   plaintext = unpad_pkcs7(aes_cbc_decrypt_bytes(ciphertext, key, initialization_vector))
-   try:
-       return plaintext.decode()
-   except UnicodeDecodeError:
-       logger.warning('failed to decrypt cookie (AES-CBC) because UTF-8 decoding failed. Possibly the key is wrong?', only_once=True)
-       return None
+def _decrypt_aes_cbc_multi(ciphertext, keys, logger, initialization_vector=b' ' * 16):
+   for key in keys:
+       plaintext = unpad_pkcs7(aes_cbc_decrypt_bytes(ciphertext, key, initialization_vector))
+       try:
+           return plaintext.decode()
+       except UnicodeDecodeError:
+           pass
+   logger.warning('failed to decrypt cookie (AES-CBC) because UTF-8 decoding failed. Possibly the key is wrong?', only_once=True)
+   return None


def _decrypt_aes_gcm(ciphertext, key, nonce, authentication_tag, logger):
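
`_decrypt_aes_cbc_multi` lets the Linux decryptor try several candidate keys (for example, keys derived from more than one keyring password) and only warn once if none yields valid UTF-8. The same pattern in isolation (`try_decrypt` is an illustrative stand-in for AES-CBC decryption plus PKCS#7 unpadding, not a yt-dlp API):

```python
def first_decodable(ciphertext, candidate_keys, try_decrypt):
    """Return the first candidate that decrypts to valid UTF-8, else None."""
    for key in candidate_keys:
        plaintext = try_decrypt(ciphertext, key)  # e.g. AES-CBC + PKCS#7 unpad
        try:
            return plaintext.decode()
        except UnicodeDecodeError:
            continue  # wrong key: try the next one
    return None
```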
@@ -1085,3 +1190,143 @@ def load(self, data):

            else:
                morsel = None


+
+class YoutubeDLCookieJar(http.cookiejar.MozillaCookieJar):
+   """
+   See [1] for cookie file format.
+
+   1. https://curl.haxx.se/docs/http-cookies.html
+   """
+   _HTTPONLY_PREFIX = '#HttpOnly_'
+   _ENTRY_LEN = 7
+   _HEADER = '''# Netscape HTTP Cookie File
+# This file is generated by yt-dlp. Do not edit.
+
+'''
+   _CookieFileEntry = collections.namedtuple(
+       'CookieFileEntry',
+       ('domain_name', 'include_subdomains', 'path', 'https_only', 'expires_at', 'name', 'value'))
+
+   def __init__(self, filename=None, *args, **kwargs):
+       super().__init__(None, *args, **kwargs)
+       if is_path_like(filename):
+           filename = os.fspath(filename)
+       self.filename = filename
+
+   @staticmethod
+   def _true_or_false(cndn):
+       return 'TRUE' if cndn else 'FALSE'
+
+   @contextlib.contextmanager
+   def open(self, file, *, write=False):
+       if is_path_like(file):
+           with open(file, 'w' if write else 'r', encoding='utf-8') as f:
+               yield f
+       else:
+           if write:
+               file.truncate(0)
+           yield file
+
+   def _really_save(self, f, ignore_discard=False, ignore_expires=False):
+       now = time.time()
+       for cookie in self:
+           if (not ignore_discard and cookie.discard
+                   or not ignore_expires and cookie.is_expired(now)):
+               continue
+           name, value = cookie.name, cookie.value
+           if value is None:
+               # cookies.txt regards 'Set-Cookie: foo' as a cookie
+               # with no name, whereas http.cookiejar regards it as a
+               # cookie with no value.
+               name, value = '', name
+           f.write('%s\n' % '\t'.join((
+               cookie.domain,
+               self._true_or_false(cookie.domain.startswith('.')),
+               cookie.path,
+               self._true_or_false(cookie.secure),
+               str_or_none(cookie.expires, default=''),
+               name, value
+           )))
+
+   def save(self, filename=None, *args, **kwargs):
+       """
+       Save cookies to a file.
+       Code is taken from CPython 3.6
+       https://github.com/python/cpython/blob/8d999cbf4adea053be6dbb612b9844635c4dfb8e/Lib/http/cookiejar.py#L2091-L2117 """
+
+       if filename is None:
+           if self.filename is not None:
+               filename = self.filename
+           else:
+               raise ValueError(http.cookiejar.MISSING_FILENAME_TEXT)
+
+       # Store session cookies with `expires` set to 0 instead of an empty string
+       for cookie in self:
+           if cookie.expires is None:
+               cookie.expires = 0
+
+       with self.open(filename, write=True) as f:
+           f.write(self._HEADER)
+           self._really_save(f, *args, **kwargs)
+
+   def load(self, filename=None, ignore_discard=False, ignore_expires=False):
+       """Load cookies from a file."""
+       if filename is None:
+           if self.filename is not None:
+               filename = self.filename
+           else:
+               raise ValueError(http.cookiejar.MISSING_FILENAME_TEXT)
+
+       def prepare_line(line):
+           if line.startswith(self._HTTPONLY_PREFIX):
+               line = line[len(self._HTTPONLY_PREFIX):]
+           # comments and empty lines are fine
+           if line.startswith('#') or not line.strip():
+               return line
+           cookie_list = line.split('\t')
+           if len(cookie_list) != self._ENTRY_LEN:
+               raise http.cookiejar.LoadError('invalid length %d' % len(cookie_list))
+           cookie = self._CookieFileEntry(*cookie_list)
+           if cookie.expires_at and not cookie.expires_at.isdigit():
+               raise http.cookiejar.LoadError('invalid expires at %s' % cookie.expires_at)
+           return line
+
+       cf = io.StringIO()
+       with self.open(filename) as f:
+           for line in f:
+               try:
+                   cf.write(prepare_line(line))
+               except http.cookiejar.LoadError as e:
+                   if f'{line.strip()} '[0] in '[{"':
+                       raise http.cookiejar.LoadError(
+                           'Cookies file must be Netscape formatted, not JSON. See '
+                           'https://github.com/yt-dlp/yt-dlp/wiki/FAQ#how-do-i-pass-cookies-to-yt-dlp')
+                   write_string(f'WARNING: skipping cookie file entry due to {e}: {line!r}\n')
+                   continue
+       cf.seek(0)
+       self._really_load(cf, filename, ignore_discard, ignore_expires)
+       # Session cookies are denoted by either `expires` field set to
+       # an empty string or 0. MozillaCookieJar only recognizes the former
+       # (see [1]). So we need force the latter to be recognized as session
+       # cookies on our own.
+       # Session cookies may be important for cookies-based authentication,
+       # e.g. usually, when user does not check 'Remember me' check box while
+       # logging in on a site, some important cookies are stored as session
+       # cookies so that not recognizing them will result in failed login.
+       # 1. https://bugs.python.org/issue17164
+       for cookie in self:
+           # Treat `expires=0` cookies as session cookies
+           if cookie.expires == 0:
+               cookie.expires = None
+               cookie.discard = True
+
+   def get_cookie_header(self, url):
+       """Generate a Cookie HTTP header for a given url"""
+       cookie_req = urllib.request.Request(escape_url(sanitize_url(url)))
+       self.add_cookie_header(cookie_req)
+       return cookie_req.get_header('Cookie')
+
+   def clear(self, *args, **kwargs):
+       with contextlib.suppress(KeyError):
+           return super().clear(*args, **kwargs)
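
The `_CookieFileEntry` namedtuple pins down the on-disk format: seven tab-separated fields per line, with an optional `#HttpOnly_` prefix on the domain. A small round-trip sketch of one such line (all values made up):

```python
import collections

CookieFileEntry = collections.namedtuple(
    'CookieFileEntry',
    ('domain_name', 'include_subdomains', 'path', 'https_only', 'expires_at', 'name', 'value'))

# One Netscape-format line: domain, subdomain flag, path, secure flag, expiry, name, value
line = '\t'.join(('.example.com', 'TRUE', '/', 'FALSE', '1718928000', 'session_id', 'abc123'))
entry = CookieFileEntry(*line.split('\t'))
assert entry.domain_name == '.example.com' and entry.expires_at.isdigit()
```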
38
yt_dlp/dependencies/Cryptodome.py
Normal file
@@ -0,0 +1,38 @@
+from ..compat.compat_utils import passthrough_module
+
+try:
+   import Cryptodome as _parent
+except ImportError:
+   try:
+       import Crypto as _parent
+   except (ImportError, SyntaxError):  # Old Crypto gives SyntaxError in newer Python
+       _parent = passthrough_module(__name__, 'no_Cryptodome')
+       __bool__ = lambda: False
+
+del passthrough_module
+
+__version__ = ''
+AES = PKCS1_v1_5 = Blowfish = PKCS1_OAEP = SHA1 = CMAC = RSA = None
+try:
+   if _parent.__name__ == 'Cryptodome':
+       from Cryptodome import __version__
+       from Cryptodome.Cipher import AES, PKCS1_OAEP, Blowfish, PKCS1_v1_5
+       from Cryptodome.Hash import CMAC, SHA1
+       from Cryptodome.PublicKey import RSA
+   elif _parent.__name__ == 'Crypto':
+       from Crypto import __version__
+       from Crypto.Cipher import AES, PKCS1_OAEP, Blowfish, PKCS1_v1_5  # noqa: F401
+       from Crypto.Hash import CMAC, SHA1  # noqa: F401
+       from Crypto.PublicKey import RSA  # noqa: F401
+except ImportError:
+   __version__ = f'broken {__version__}'.strip()
+
+
+_yt_dlp__identifier = _parent.__name__
+if AES and _yt_dlp__identifier == 'Crypto':
+   try:
+       # In pycrypto, mode defaults to ECB. See:
+       # https://www.pycryptodome.org/en/latest/src/vs_pycrypto.html#:~:text=not%20have%20ECB%20as%20default%20mode
+       AES.new(b'abcdefghijklmnop')
+   except TypeError:
+       _yt_dlp__identifier = 'pycrypto'
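
Call sites can now treat the new module itself as a feature probe, as the HLS hunk further down does with `Cryptodome.AES`. A minimal usage sketch (assuming a yt-dlp checkout of this vintage is importable; the fallback branch is illustrative):

```python
from yt_dlp.dependencies import Cryptodome

if Cryptodome.AES:  # truthy only when pycryptodomex/pycryptodome imported cleanly
    cipher = Cryptodome.AES.new(b'0' * 16, Cryptodome.AES.MODE_CBC, iv=b'0' * 16)
else:
    raise RuntimeError('pycryptodomex is not available')
```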
@@ -23,24 +23,6 @@
certifi = None


-try:
-   from Cryptodome.Cipher import AES as Cryptodome_AES
-except ImportError:
-   try:
-       from Crypto.Cipher import AES as Cryptodome_AES
-   except (ImportError, SyntaxError):  # Old Crypto gives SyntaxError in newer Python
-       Cryptodome_AES = None
-   else:
-       try:
-           # In pycrypto, mode defaults to ECB. See:
-           # https://www.pycryptodome.org/en/latest/src/vs_pycrypto.html#:~:text=not%20have%20ECB%20as%20default%20mode
-           Cryptodome_AES.new(b'abcdefghijklmnop')
-       except TypeError:
-           pass
-       else:
-           Cryptodome_AES._yt_dlp__identifier = 'pycrypto'
-
-
try:
    import mutagen
except ImportError:
@@ -84,12 +66,16 @@
    xattr._yt_dlp__identifier = 'pyxattr'


+from . import Cryptodome
+
all_dependencies = {k: v for k, v in globals().items() if not k.startswith('_')}


available_dependencies = {k: v for k, v in all_dependencies.items() if v}


+# Deprecated
+Cryptodome_AES = Cryptodome.AES
+
+
__all__ = [
    'all_dependencies',
    'available_dependencies',
@@ -30,7 +30,7 @@ def get_suitable_downloader(info_dict, params={}, default=NO_DEFAULT, protocol=N
from .http import HttpFD
from .ism import IsmFD
from .mhtml import MhtmlFD
-from .niconico import NiconicoDmcFD
+from .niconico import NiconicoDmcFD, NiconicoLiveFD
from .rtmp import RtmpFD
from .rtsp import RtspFD
from .websocket import WebSocketFragmentFD
@@ -50,6 +50,7 @@ def get_suitable_downloader(info_dict, params={}, default=NO_DEFAULT, protocol=N
    'ism': IsmFD,
    'mhtml': MhtmlFD,
    'niconico_dmc': NiconicoDmcFD,
+   'niconico_live': NiconicoLiveFD,
    'fc2_live': FC2LiveFD,
    'websocket_frag': WebSocketFragmentFD,
    'youtube_live_chat': YoutubeLiveChatFD,
@@ -49,10 +49,10 @@ class FileDownloader:
    verbose:            Print additional info to stdout.
    quiet:              Do not print messages to stdout.
    ratelimit:          Download speed limit, in bytes/sec.
-   continuedl:         Attempt to continue downloads if possible
    throttledratelimit: Assume the download is being throttled below this speed (bytes/sec)
-   retries:            Number of times to retry for HTTP error 5xx
-   file_access_retries:   Number of times to retry on file access error
+   retries:            Number of times to retry for expected network errors.
+                       Default is 0 for API, but 10 for CLI
+   file_access_retries:   Number of times to retry on file access error (default: 3)
    buffersize:         Size of download buffer in bytes.
    noresizebuffer:     Do not automatically resize the download buffer.
    continuedl:         Try to continue downloads if possible.
@@ -138,17 +138,21 @@ def calc_percent(byte_counter, data_len):
    def format_percent(percent):
        return ' N/A%' if percent is None else f'{percent:>5.1f}%'

-   @staticmethod
-   def calc_eta(start, now, total, current):
+   @classmethod
+   def calc_eta(cls, start_or_rate, now_or_remaining, total=NO_DEFAULT, current=NO_DEFAULT):
+       if total is NO_DEFAULT:
+           rate, remaining = start_or_rate, now_or_remaining
+           if None in (rate, remaining):
+               return None
+           return int(float(remaining) / rate)
+
+       start, now = start_or_rate, now_or_remaining
        if total is None:
            return None
        if now is None:
            now = time.time()
-       dif = now - start
-       if current == 0 or dif < 0.001:  # One millisecond
-           return None
-       rate = float(current) / dif
-       return int((float(total) - float(current)) / rate)
+       rate = cls.calc_speed(start, now, current)
+       return rate and int((float(total) - float(current)) / rate)

    @staticmethod
    def calc_speed(start, now, bytes):
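
`calc_eta` now accepts two shapes, discriminated by whether `total` was passed: the legacy `(start, now, total, current)` form, and a new `(rate, remaining)` form used by the fragment progress hook further down. Both call styles with made-up numbers (assuming `FileDownloader` is imported from `yt_dlp.downloader.common`):

```python
# Legacy form: 4 positional args -> rate is derived from elapsed time.
# 100 KiB downloaded in 10 s out of 1 MiB => ~92 s remaining
eta = FileDownloader.calc_eta(0, 10, 1024 * 1024, 100 * 1024)

# New form: 2 positional args are (rate, remaining_bytes).
eta = FileDownloader.calc_eta(10240, 921600)  # 10 KiB/s, 900 KiB left -> 90
```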
@@ -165,6 +169,12 @@ def format_speed(speed):
    def format_retries(retries):
        return 'inf' if retries == float('inf') else int(retries)

+   @staticmethod
+   def filesize_or_none(unencoded_filename):
+       if os.path.isfile(unencoded_filename):
+           return os.path.getsize(unencoded_filename)
+       return 0
+
    @staticmethod
    def best_block_size(elapsed_time, bytes):
        new_min = max(bytes / 2.0, 1.0)
@@ -225,7 +235,7 @@ def error_callback(err, count, retries, *, fd):
                sleep_func=fd.params.get('retry_sleep_functions', {}).get('file_access'))

        def wrapper(self, func, *args, **kwargs):
-           for retry in RetryManager(self.params.get('file_access_retries'), error_callback, fd=self):
+           for retry in RetryManager(self.params.get('file_access_retries', 3), error_callback, fd=self):
                try:
                    return func(self, *args, **kwargs)
                except OSError as err:
@@ -285,7 +295,8 @@ def _prepare_multiline_status(self, lines=1):
            self._multiline = BreaklineStatusPrinter(self.ydl._out_files.out, lines)
        else:
            self._multiline = MultilinePrinter(self.ydl._out_files.out, lines, not self.params.get('quiet'))
-       self._multiline.allow_colors = self._multiline._HAVE_FULLCAP and not self.params.get('no_color')
+       self._multiline.allow_colors = self.ydl._allow_colors.out and self.ydl._allow_colors.out != 'no_color'
+       self._multiline._HAVE_FULLCAP = self.ydl._allow_colors.out

    def _finish_multiline_status(self):
        self._multiline.end()
@@ -23,7 +23,6 @@
    encodeArgument,
    encodeFilename,
    find_available_port,
-   handle_youtubedl_headers,
    remove_end,
    sanitized_Request,
    traverse_obj,
@@ -104,6 +103,7 @@ def supports(cls, info_dict):
        return all((
            not info_dict.get('to_stdout') or Features.TO_STDOUT in cls.SUPPORTED_FEATURES,
            '+' not in info_dict['protocol'] or Features.MULTIPLE_FORMATS in cls.SUPPORTED_FEATURES,
+           not traverse_obj(info_dict, ('hls_aes', ...), 'extra_param_to_segment_url'),
            all(proto in cls.SUPPORTED_PROTOCOLS for proto in info_dict['protocol'].split('+')),
        ))

@@ -175,7 +175,7 @@ def _call_downloader(self, tmpfilename, info_dict):
        return 0

    def _call_process(self, cmd, info_dict):
-       return Popen.run(cmd, text=True, stderr=subprocess.PIPE)
+       return Popen.run(cmd, text=True, stderr=subprocess.PIPE if self._CAPTURE_STDERR else None)


class CurlFD(ExternalFD):
@@ -528,10 +528,9 @@ def _call_downloader(self, tmpfilename, info_dict):
        selected_formats = info_dict.get('requested_formats') or [info_dict]
        for i, fmt in enumerate(selected_formats):
            if fmt.get('http_headers') and re.match(r'^https?://', fmt['url']):
-               headers_dict = handle_youtubedl_headers(fmt['http_headers'])
                # Trailing \r\n after each HTTP header is important to prevent warning from ffmpeg/avconv:
                # [http @ 00000000003d2fa0] No trailing CRLF found in HTTP header.
-               args.extend(['-headers', ''.join(f'{key}: {val}\r\n' for key, val in headers_dict.items())])
+               args.extend(['-headers', ''.join(f'{key}: {val}\r\n' for key, val in fmt['http_headers'].items())])

        if start_time:
            args += ['-ss', str(start_time)]
@@ -34,8 +34,8 @@ class FragmentFD(FileDownloader):

    Available options:

-   fragment_retries:   Number of times to retry a fragment for HTTP error (DASH
-                       and hlsnative only)
+   fragment_retries:   Number of times to retry a fragment for HTTP error
+                       (DASH and hlsnative only). Default is 0 for API, but 10 for CLI
    skip_unavailable_fragments:
                        Skip unavailable fragments (DASH and hlsnative only)
    keep_fragments:     Keep downloaded fragments on disk after downloading is
@@ -121,6 +121,11 @@ def _download_fragment(self, ctx, frag_url, info_dict, headers=None, request_dat
            'request_data': request_data,
            'ctx_id': ctx.get('ctx_id'),
        }
+       frag_resume_len = 0
+       if ctx['dl'].params.get('continuedl', True):
+           frag_resume_len = self.filesize_or_none(self.temp_name(fragment_filename))
+       fragment_info_dict['frag_resume_len'] = ctx['frag_resume_len'] = frag_resume_len
+
        success, _ = ctx['dl'].download(fragment_filename, fragment_info_dict)
        if not success:
            return False
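
The new `frag_resume_len` bookkeeping only applies when `continuedl` is enabled; it records how many bytes of a fragment's temporary file already exist so the speed calculation later can subtract them. The guard in isolation (standalone sketch, with `filesize_or_none` inlined from the hunk that introduces it above):

```python
import os

def filesize_or_none(path):
    return os.path.getsize(path) if os.path.isfile(path) else 0

def frag_resume_len(params, part_filename):
    """How much of this fragment was already downloaded, if resuming is allowed."""
    if params.get('continuedl', True):
        return filesize_or_none(part_filename)
    return 0
```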
@@ -155,9 +160,7 @@ def _append_fragment(self, ctx, frag_content):
            del ctx['fragment_filename_sanitized']

    def _prepare_frag_download(self, ctx):
-       if 'live' not in ctx:
-           ctx['live'] = False
-       if not ctx['live']:
+       if not ctx.setdefault('live', False):
            total_frags_str = '%d' % ctx['total_frags']
            ad_frags = ctx.get('ad_frags', 0)
            if ad_frags:
@@ -170,15 +173,17 @@ def _prepare_frag_download(self, ctx):
            **self.params,
            'noprogress': True,
            'test': False,
+           'sleep_interval': 0,
+           'max_sleep_interval': 0,
+           'sleep_interval_subtitles': 0,
        })
        tmpfilename = self.temp_name(ctx['filename'])
        open_mode = 'wb'
-       resume_len = 0

        # Establish possible resume length
-       if os.path.isfile(encodeFilename(tmpfilename)):
+       resume_len = self.filesize_or_none(tmpfilename)
+       if resume_len > 0:
            open_mode = 'ab'
-           resume_len = os.path.getsize(encodeFilename(tmpfilename))

        # Should be initialized before ytdl file check
        ctx.update({
@@ -187,7 +192,9 @@ def _prepare_frag_download(self, ctx):
        })

        if self.__do_ytdl_file(ctx):
-           if os.path.isfile(encodeFilename(self.ytdl_filename(ctx['filename']))):
+           ytdl_file_exists = os.path.isfile(encodeFilename(self.ytdl_filename(ctx['filename'])))
+           continuedl = self.params.get('continuedl', True)
+           if continuedl and ytdl_file_exists:
                self._read_ytdl_file(ctx)
                is_corrupt = ctx.get('ytdl_corrupt') is True
                is_inconsistent = ctx['fragment_index'] > 0 and resume_len == 0
@@ -201,7 +208,12 @@ def _prepare_frag_download(self, ctx):
                    if 'ytdl_corrupt' in ctx:
                        del ctx['ytdl_corrupt']
                    self._write_ytdl_file(ctx)

            else:
+               if not continuedl:
+                   if ytdl_file_exists:
+                       self._read_ytdl_file(ctx)
+                   ctx['fragment_index'] = resume_len = 0
                self._write_ytdl_file(ctx)
                assert ctx['fragment_index'] == 0

@@ -274,12 +286,10 @@ def frag_progress_hook(s):
            else:
                frag_downloaded_bytes = s['downloaded_bytes']
                state['downloaded_bytes'] += frag_downloaded_bytes - ctx['prev_frag_downloaded_bytes']
-               if not ctx['live']:
-                   state['eta'] = self.calc_eta(
-                       start, time_now, estimated_size - resume_len,
-                       state['downloaded_bytes'] - resume_len)
                ctx['speed'] = state['speed'] = self.calc_speed(
-                   ctx['fragment_started'], time_now, frag_downloaded_bytes)
+                   ctx['fragment_started'], time_now, frag_downloaded_bytes - ctx.get('frag_resume_len', 0))
+               if not ctx['live']:
+                   state['eta'] = self.calc_eta(state['speed'], estimated_size - state['downloaded_bytes'])
                ctx['prev_frag_downloaded_bytes'] = frag_downloaded_bytes
            self._hook_progress(state, info_dict)

@@ -297,7 +307,7 @@ def _finish_frag_download(self, ctx, info_dict):

        to_file = ctx['tmpfilename'] != '-'
        if to_file:
-           downloaded_bytes = os.path.getsize(encodeFilename(ctx['tmpfilename']))
+           downloaded_bytes = self.filesize_or_none(ctx['tmpfilename'])
        else:
            downloaded_bytes = ctx['complete_frags_downloaded_bytes']

@@ -360,7 +370,8 @@ def decrypt_fragment(fragment, frag_content):
            if not decrypt_info or decrypt_info['METHOD'] != 'AES-128':
                return frag_content
            iv = decrypt_info.get('IV') or struct.pack('>8xq', fragment['media_sequence'])
-           decrypt_info['KEY'] = decrypt_info.get('KEY') or _get_key(info_dict.get('_decryption_key_url') or decrypt_info['URI'])
+           decrypt_info['KEY'] = (decrypt_info.get('KEY')
+                                  or _get_key(traverse_obj(info_dict, ('hls_aes', 'uri')) or decrypt_info['URI']))
            # Don't decrypt the content in tests since the data is explicitly truncated and it's not to a valid block
            # size (see https://github.com/ytdl-org/youtube-dl/pull/27660). Tests only care that the correct data downloaded,
            # not what it decrypts to.
@@ -382,7 +393,7 @@ def download_and_append_fragments_multiple(self, *args, **kwargs):
        max_workers = self.params.get('concurrent_fragment_downloads', 1)
        if max_progress > 1:
            self._prepare_multiline_status(max_progress)
-       is_live = any(traverse_obj(args, (..., 2, 'is_live'), default=[]))
+       is_live = any(traverse_obj(args, (..., 2, 'is_live')))

        def thread_func(idx, ctx, fragments, info_dict, tpe):
            ctx['max_progress'] = max_progress
@@ -465,7 +476,8 @@ def error_callback(err, count, retries):
        for retry in RetryManager(self.params.get('fragment_retries'), error_callback):
            try:
                ctx['fragment_count'] = fragment.get('fragment_count')
-               if not self._download_fragment(ctx, fragment['url'], info_dict, headers):
+               if not self._download_fragment(
+                       ctx, fragment['url'], info_dict, headers, info_dict.get('request_data')):
                    return
            except (urllib.error.HTTPError, http.client.IncompleteRead) as err:
                retry.error = err
@@ -495,7 +507,7 @@ def _download_fragment(fragment):
                download_fragment(fragment, ctx_copy)
                return fragment, fragment['frag_index'], ctx_copy.get('fragment_filename_sanitized')

-       self.report_warning('The download speed shown is only of one thread. This is a known issue and patches are welcome')
+       self.report_warning('The download speed shown is only of one thread. This is a known issue')
        with tpe or concurrent.futures.ThreadPoolExecutor(max_workers) as pool:
            try:
                for fragment, frag_index, frag_filename in pool.map(_download_fragment, fragments):
@@ -7,8 +7,15 @@
from .external import FFmpegFD
from .fragment import FragmentFD
from .. import webvtt
-from ..dependencies import Cryptodome_AES
-from ..utils import bug_reports_message, parse_m3u8_attributes, update_url_query
+from ..dependencies import Cryptodome
+from ..utils import (
+   bug_reports_message,
+   parse_m3u8_attributes,
+   remove_start,
+   traverse_obj,
+   update_url_query,
+   urljoin,
+)


class HlsFD(FragmentFD):
@@ -63,7 +70,7 @@ def real_download(self, filename, info_dict):
        can_download, message = self.can_download(s, info_dict, self.params.get('allow_unplayable_formats')), None
        if can_download:
            has_ffmpeg = FFmpegFD.available()
-           no_crypto = not Cryptodome_AES and '#EXT-X-KEY:METHOD=AES-128' in s
+           no_crypto = not Cryptodome.AES and '#EXT-X-KEY:METHOD=AES-128' in s
            if no_crypto and has_ffmpeg:
                can_download, message = False, 'The stream has AES-128 encryption and pycryptodomex is not available'
            elif no_crypto:
@@ -150,6 +157,13 @@ def is_ad_fragment_end(s):
        i = 0
        media_sequence = 0
        decrypt_info = {'METHOD': 'NONE'}
+       external_aes_key = traverse_obj(info_dict, ('hls_aes', 'key'))
+       if external_aes_key:
+           external_aes_key = binascii.unhexlify(remove_start(external_aes_key, '0x'))
+           assert len(external_aes_key) in (16, 24, 32), 'Invalid length for HLS AES-128 key'
+       external_aes_iv = traverse_obj(info_dict, ('hls_aes', 'iv'))
+       if external_aes_iv:
+           external_aes_iv = binascii.unhexlify(remove_start(external_aes_iv, '0x').zfill(32))
        byte_range = {}
        discontinuity_count = 0
        frag_index = 0
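
The externally supplied key and IV arrive as hex strings, optionally `0x`-prefixed, and the IV is additionally zero-padded to 16 bytes. That normalization in isolation (a sketch using `str.removeprefix`, Python 3.9+, in place of yt-dlp's `remove_start`):

```python
import binascii

def parse_hls_aes_hex(key_hex=None, iv_hex=None):
    """Normalize user-supplied hex key/IV as the hunk above does."""
    key = iv = None
    if key_hex:
        key = binascii.unhexlify(key_hex.removeprefix('0x'))
        assert len(key) in (16, 24, 32), 'Invalid length for HLS AES-128 key'
    if iv_hex:
        # zfill(32) left-pads the hex so it unhexlifies to exactly 16 bytes
        iv = binascii.unhexlify(iv_hex.removeprefix('0x').zfill(32))
    return key, iv

key, iv = parse_hls_aes_hex('0x00112233445566778899aabbccddeeff', '0x1')
assert len(key) == 16 and len(iv) == 16
```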
@@ -165,10 +179,7 @@ def is_ad_fragment_end(s):
                frag_index += 1
                if frag_index <= ctx['fragment_index']:
                    continue
-               frag_url = (
-                   line
-                   if re.match(r'^https?://', line)
-                   else urllib.parse.urljoin(man_url, line))
+               frag_url = urljoin(man_url, line)
                if extra_query:
                    frag_url = update_url_query(frag_url, extra_query)

@@ -190,10 +201,7 @@ def is_ad_fragment_end(s):
                    return False
                frag_index += 1
                map_info = parse_m3u8_attributes(line[11:])
-               frag_url = (
-                   map_info.get('URI')
-                   if re.match(r'^https?://', map_info.get('URI'))
-                   else urllib.parse.urljoin(man_url, map_info.get('URI')))
+               frag_url = urljoin(man_url, map_info.get('URI'))
                if extra_query:
                    frag_url = update_url_query(frag_url, extra_query)

@@ -218,15 +226,18 @@ def is_ad_fragment_end(s):
                decrypt_url = decrypt_info.get('URI')
                decrypt_info = parse_m3u8_attributes(line[11:])
                if decrypt_info['METHOD'] == 'AES-128':
-                   if 'IV' in decrypt_info:
+                   if external_aes_iv:
+                       decrypt_info['IV'] = external_aes_iv
+                   elif 'IV' in decrypt_info:
                        decrypt_info['IV'] = binascii.unhexlify(decrypt_info['IV'][2:].zfill(32))
-                   if not re.match(r'^https?://', decrypt_info['URI']):
-                       decrypt_info['URI'] = urllib.parse.urljoin(
-                           man_url, decrypt_info['URI'])
-                   if extra_query:
-                       decrypt_info['URI'] = update_url_query(decrypt_info['URI'], extra_query)
-                   if decrypt_url != decrypt_info['URI']:
-                       decrypt_info['KEY'] = None
+                   if external_aes_key:
+                       decrypt_info['KEY'] = external_aes_key
+                   else:
+                       decrypt_info['URI'] = urljoin(man_url, decrypt_info['URI'])
+                       if extra_query:
+                           decrypt_info['URI'] = update_url_query(decrypt_info['URI'], extra_query)
+                       if decrypt_url != decrypt_info['URI']:
+                           decrypt_info['KEY'] = None

            elif line.startswith('#EXT-X-MEDIA-SEQUENCE'):
                media_sequence = int(line[22:])
@@ -45,8 +45,8 @@ class DownloadContext(dict):
        ctx.tmpfilename = self.temp_name(filename)
        ctx.stream = None

-       # Do not include the Accept-Encoding header
-       headers = {'Youtubedl-no-compression': 'True'}
+       # Disable compression
+       headers = {'Accept-Encoding': 'identity'}
        add_headers = info_dict.get('http_headers')
        if add_headers:
            headers.update(add_headers)
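
The private `Youtubedl-no-compression` pseudo-header gives way to the standard `Accept-Encoding: identity`, which asks the server for the raw byte stream so `Content-Length` and range-based resume stay trustworthy. For illustration, the equivalent request built with plain `urllib` (placeholder URL):

```python
import urllib.request

req = urllib.request.Request(
    'https://example.com/video.mp4',
    headers={'Accept-Encoding': 'identity'},  # ask for uncompressed bytes
)
# With identity encoding, Content-Length reflects the real file size,
# so resume offsets and progress math remain valid.
```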
@@ -150,7 +150,8 @@ def establish_connection():
                # Content-Range is either not present or invalid. Assuming remote webserver is
                # trying to send the whole file, resume is not possible, so wiping the local file
                # and performing entire redownload
-               self.report_unable_to_resume()
+               elif range_start > 0:
+                   self.report_unable_to_resume()
                ctx.resume_len = 0
                ctx.open_mode = 'wb'
                ctx.data_len = ctx.content_len = int_or_none(ctx.data.info().get('Content-length', None))
@@ -211,7 +212,12 @@ def close_stream():
            ctx.stream = None

        def download():
-           data_len = ctx.data.info().get('Content-length', None)
+           data_len = ctx.data.info().get('Content-length')
+
+           if ctx.data.info().get('Content-encoding'):
+               # Content-encoding is present, Content-length is not reliable anymore as we are
+               # doing auto decompression. (See: https://github.com/yt-dlp/yt-dlp/pull/6176)
+               data_len = None

            # Range HTTP header may be ignored/unsupported by a webserver
            # (e.g. extractor/scivee.py, extractor/bambuser.py).
@@ -1,8 +1,17 @@
+import json
import threading
+import time

from . import get_suitable_downloader
from .common import FileDownloader
-from ..utils import sanitized_Request
+from .external import FFmpegFD
+from ..utils import (
+   DownloadError,
+   WebSocketsWrapper,
+   sanitized_Request,
+   str_or_none,
+   try_get,
+)


class NiconicoDmcFD(FileDownloader):
@@ -50,3 +59,93 @@ def heartbeat():
                timer[0].cancel()
                download_complete = True
        return success


+class NiconicoLiveFD(FileDownloader):
+   """ Downloads niconico live without being stopped """
+
+   def real_download(self, filename, info_dict):
+       video_id = info_dict['video_id']
+       ws_url = info_dict['url']
+       ws_extractor = info_dict['ws']
+       ws_origin_host = info_dict['origin']
+       cookies = info_dict.get('cookies')
+       live_quality = info_dict.get('live_quality', 'high')
+       live_latency = info_dict.get('live_latency', 'high')
+       dl = FFmpegFD(self.ydl, self.params or {})
+
+       new_info_dict = info_dict.copy()
+       new_info_dict.update({
+           'protocol': 'm3u8',
+       })
+
+       def communicate_ws(reconnect):
+           if reconnect:
+               ws = WebSocketsWrapper(ws_url, {
+                   'Cookies': str_or_none(cookies) or '',
+                   'Origin': f'https://{ws_origin_host}',
+                   'Accept': '*/*',
+                   'User-Agent': self.params['http_headers']['User-Agent'],
+               })
+               if self.ydl.params.get('verbose', False):
+                   self.to_screen('[debug] Sending startWatching request')
+               ws.send(json.dumps({
+                   'type': 'startWatching',
+                   'data': {
+                       'stream': {
+                           'quality': live_quality,
+                           'protocol': 'hls+fmp4',
+                           'latency': live_latency,
+                           'chasePlay': False
+                       },
+                       'room': {
+                           'protocol': 'webSocket',
+                           'commentable': True
+                       },
+                       'reconnect': True,
+                   }
+               }))
+           else:
+               ws = ws_extractor
+           with ws:
+               while True:
+                   recv = ws.recv()
+                   if not recv:
+                       continue
+                   data = json.loads(recv)
+                   if not data or not isinstance(data, dict):
+                       continue
+                   if data.get('type') == 'ping':
+                       # pong back
+                       ws.send(r'{"type":"pong"}')
+                       ws.send(r'{"type":"keepSeat"}')
+                   elif data.get('type') == 'disconnect':
+                       self.write_debug(data)
+                       return True
+                   elif data.get('type') == 'error':
+                       self.write_debug(data)
+                       message = try_get(data, lambda x: x['body']['code'], str) or recv
+                       return DownloadError(message)
+                   elif self.ydl.params.get('verbose', False):
+                       if len(recv) > 100:
+                           recv = recv[:100] + '...'
+                       self.to_screen('[debug] Server said: %s' % recv)
+
+       def ws_main():
+           reconnect = False
+           while True:
+               try:
+                   ret = communicate_ws(reconnect)
+                   if ret is True:
+                       return
+               except BaseException as e:
+                   self.to_screen('[%s] %s: Connection error occured, reconnecting after 10 seconds: %s' % ('niconico:live', video_id, str_or_none(e)))
+                   time.sleep(10)
+                   continue
+               finally:
+                   reconnect = True
+
+       thread = threading.Thread(target=ws_main, daemon=True)
+       thread.start()
+
+       return dl.download(filename, new_info_dict)
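
Most of the WebSocket loop is keepalive: every server `ping` is answered with a `pong` plus a `keepSeat` message so the viewing seat is not reclaimed while ffmpeg pulls the HLS stream. Just that handler, sketched standalone (`ws` can be any object with a `send` method; the message shapes follow the code above):

```python
import json

def handle_message(ws, raw):
    """Answer niconico live keepalive pings; return True once the session ends."""
    data = json.loads(raw)
    if data.get('type') == 'ping':
        ws.send('{"type":"pong"}')
        ws.send('{"type":"keepSeat"}')
        return False
    return data.get('type') == 'disconnect'
```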
||||||
yt_dlp/extractor/_extractors.py
@@ -21,7 +21,8 @@
     YoutubeYtBeIE,
     YoutubeYtUserIE,
     YoutubeWatchLaterIE,
-    YoutubeShortsAudioPivotIE
+    YoutubeShortsAudioPivotIE,
+    YoutubeConsentRedirectIE,
 )
 
 from .abc import (
@@ -101,6 +102,7 @@
     AmericasTestKitchenIE,
     AmericasTestKitchenSeasonIE,
 )
+from .anchorfm import AnchorFMEpisodeIE
 from .angel import AngelIE
 from .anvato import AnvatoIE
 from .aol import AolIE
@@ -121,6 +123,7 @@
 from .archiveorg import (
     ArchiveOrgIE,
     YoutubeWebArchiveIE,
+    VLiveWebArchiveIE,
 )
 from .arcpublishing import ArcPublishingIE
 from .arkena import ArkenaIE
@@ -201,7 +204,11 @@
     BFMTVLiveIE,
     BFMTVArticleIE,
 )
-from .bibeltv import BibelTVIE
+from .bibeltv import (
+    BibelTVLiveIE,
+    BibelTVSeriesIE,
+    BibelTVVideoIE,
+)
 from .bigflix import BigflixIE
 from .bigo import BigoIE
 from .bild import BildIE
@@ -236,19 +243,28 @@
     BleacherReportIE,
     BleacherReportCMSIE,
 )
+from .blerp import BlerpIE
 from .blogger import BloggerIE
 from .bloomberg import BloombergIE
 from .bokecc import BokeCCIE
 from .bongacams import BongaCamsIE
 from .bostonglobe import BostonGlobeIE
 from .box import BoxIE
-from .booyah import BooyahClipsIE
+from .boxcast import BoxCastVideoIE
 from .bpb import BpbIE
 from .br import (
     BRIE,
     BRMediathekIE,
 )
 from .bravotv import BravoTVIE
+from .brainpop import (
+    BrainPOPIE,
+    BrainPOPJrIE,
+    BrainPOPELLIE,
+    BrainPOPEspIE,
+    BrainPOPFrIE,
+    BrainPOPIlIE,
+)
 from .breakcom import BreakIE
 from .breitbart import BreitBartIE
 from .brightcove import (
@@ -268,6 +284,10 @@
     CamdemyIE,
     CamdemyFolderIE
 )
+from .camfm import (
+    CamFMEpisodeIE,
+    CamFMShowIE
+)
 from .cammodels import CamModelsIE
 from .camsoda import CamsodaIE
 from .camtasia import CamtasiaEmbedIE
@@ -275,12 +295,6 @@
 from .canalalpha import CanalAlphaIE
 from .canalplus import CanalplusIE
 from .canalc2 import Canalc2IE
-from .canvas import (
-    CanvasIE,
-    CanvasEenIE,
-    VrtNUIE,
-    DagelijkseKostIE,
-)
 from .carambatv import (
     CarambaTVIE,
     CarambaTVPageIE,
@@ -293,15 +307,18 @@
     CBCGemPlaylistIE,
     CBCGemLiveIE,
 )
-from .cbs import CBSIE
-from .cbslocal import (
-    CBSLocalIE,
-    CBSLocalArticleIE,
+from .cbs import (
+    CBSIE,
+    ParamountPressExpressIE,
 )
 from .cbsinteractive import CBSInteractiveIE
 from .cbsnews import (
     CBSNewsEmbedIE,
     CBSNewsIE,
+    CBSLocalIE,
+    CBSLocalArticleIE,
+    CBSLocalLiveIE,
+    CBSNewsLiveIE,
     CBSNewsLiveVideoIE,
 )
 from .cbssports import (
@@ -340,6 +357,7 @@
 )
 from .ciscowebex import CiscoWebexIE
 from .cjsw import CJSWIE
+from .clipchamp import ClipchampIE
 from .cliphunter import CliphunterIE
 from .clippit import ClippitIE
 from .cliprs import ClipRsIE
@@ -387,9 +405,12 @@
     CrowdBunkerIE,
     CrowdBunkerChannelIE,
 )
+from .crtvg import CrtvgIE
 from .crunchyroll import (
     CrunchyrollBetaIE,
     CrunchyrollBetaShowIE,
+    CrunchyrollMusicIE,
+    CrunchyrollArtistIE,
 )
 from .cspan import CSpanIE, CSpanCongressIE
 from .ctsnews import CtsNewsIE
@@ -406,6 +427,10 @@
     CybraryIE,
     CybraryCourseIE
 )
+from .dacast import (
+    DacastVODIE,
+    DacastPlaylistIE,
+)
 from .daftsex import DaftsexIE
 from .dailymail import DailyMailIE
 from .dailymotion import (
@@ -436,6 +461,10 @@
 )
 from .democracynow import DemocracynowIE
 from .detik import DetikEmbedIE
+from .dlf import (
+    DLFIE,
+    DLFCorpusIE,
+)
 from .dfb import DFBIE
 from .dhm import DHMIE
 from .digg import DiggIE
@@ -468,6 +497,7 @@
     DiscoveryPlusItalyIE,
     DiscoveryPlusItalyShowIE,
     DiscoveryPlusIndiaShowIE,
+    GlobalCyclingNetworkPlusIE,
 )
 from .dreisat import DreiSatIE
 from .drbonanza import DRBonanzaIE
@@ -491,6 +521,7 @@
     DeuxMNewsIE
 )
 from .digitalconcerthall import DigitalConcertHallIE
+from .discogs import DiscogsReleasePlaylistIE
 from .discovery import DiscoveryIE
 from .disney import DisneyIE
 from .dispeak import DigitallySpeakingIE
@@ -505,6 +536,7 @@
 )
 from .eagleplatform import EaglePlatformIE, ClipYouEmbedIE
 from .ebaumsworld import EbaumsWorldIE
+from .ebay import EbayIE
 from .echomsk import EchoMskIE
 from .egghead import (
     EggheadCourseIE,
@@ -514,6 +546,7 @@
 from .eighttracks import EightTracksIE
 from .einthusan import EinthusanIE
 from .eitb import EitbIE
+from .elevensports import ElevenSportsIE
 from .ellentube import (
     EllenTubeIE,
     EllenTubeVideoIE,
@@ -547,6 +580,7 @@
     ESPNCricInfoIE,
 )
 from .esri import EsriVideoIE
+from .ettutv import EttuTvIE
 from .europa import EuropaIE, EuroParlWebstreamIE
 from .europeantour import EuropeanTourIE
 from .eurosport import EurosportIE
@@ -633,6 +667,7 @@
     FunimationShowIE,
 )
 from .funk import FunkIE
+from .funker530 import Funker530IE
 from .fusion import FusionIE
 from .fuyintv import FuyinTVIE
 from .gab import (
@@ -668,10 +703,18 @@
 from .giantbomb import GiantBombIE
 from .giga import GigaIE
 from .glide import GlideIE
+from .globalplayer import (
+    GlobalPlayerLiveIE,
+    GlobalPlayerLivePlaylistIE,
+    GlobalPlayerAudioIE,
+    GlobalPlayerAudioEpisodeIE,
+    GlobalPlayerVideoIE
+)
 from .globo import (
     GloboIE,
     GloboArticleIE,
 )
+from .gmanetwork import GMANetworkVideoIE
 from .go import GoIE
 from .godtube import GodTubeIE
 from .gofile import GofileIE
@@ -703,13 +746,16 @@
 from .heise import HeiseIE
 from .hellporno import HellPornoIE
 from .helsinki import HelsinkiIE
-from .hentaistigma import HentaiStigmaIE
 from .hgtv import HGTVComShowIE
 from .hketv import HKETVIE
 from .hidive import HiDiveIE
 from .historicfilms import HistoricFilmsIE
 from .hitbox import HitboxIE, HitboxLiveIE
 from .hitrecord import HitRecordIE
+from .hollywoodreporter import (
+    HollywoodReporterIE,
+    HollywoodReporterPlaylistIE,
+)
 from .holodex import HolodexIE
 from .hotnewhiphop import HotNewHipHopIE
 from .hotstar import (
@@ -721,6 +767,7 @@
 )
 from .howcast import HowcastIE
 from .howstuffworks import HowStuffWorksIE
+from .hrefli import HrefLiRedirectIE
 from .hrfensehen import HRFernsehenIE
 from .hrti import (
     HRTiIE,
@@ -743,12 +790,14 @@
     HungamaAlbumPlaylistIE,
 )
 from .hypem import HypemIE
+from .hypergryph import MonsterSirenHypergryphMusicIE
 from .hytale import HytaleIE
 from .icareus import IcareusIE
 from .ichinanalive import (
     IchinanaLiveIE,
     IchinanaLiveClipIE,
 )
+from .idolplus import IdolPlusIE
 from .ign import (
     IGNIE,
     IGNVideoIE,
@@ -833,6 +882,7 @@
 from .jeuxvideo import JeuxVideoIE
 from .jove import JoveIE
 from .joj import JojIE
+from .jstream import JStreamIE
 from .jwplatform import JWPlatformIE
 from .kakao import KakaoIE
 from .kaltura import KalturaIE
@@ -842,7 +892,6 @@
 from .karrierevideos import KarriereVideosIE
 from .keezmovies import KeezMoviesIE
 from .kelbyone import KelbyOneIE
-from .ketnet import KetnetIE
 from .khanacademy import (
     KhanAcademyIE,
     KhanAcademyUnitIE,
@@ -855,6 +904,7 @@
 from .kickstarter import KickStarterIE
 from .kinja import KinjaEmbedIE
 from .kinopoisk import KinoPoiskIE
+from .kommunetv import KommunetvIE
 from .kompas import KompasVideoIE
 from .konserthusetplay import KonserthusetPlayIE
 from .koo import KooIE
@@ -906,6 +956,10 @@
     LePlaylistIE,
     LetvCloudIE,
 )
+from .lefigaro import (
+    LeFigaroVideoEmbedIE,
+    LeFigaroVideoSectionIE,
+)
 from .lego import LEGOIE
 from .lemonde import LemondeIE
 from .lenta import LentaIE
@@ -924,10 +978,6 @@
     LimelightChannelIE,
     LimelightChannelListIE,
 )
-from .line import (
-    LineLiveIE,
-    LineLiveChannelIE,
-)
 from .linkedin import (
     LinkedInIE,
     LinkedInLearningIE,
@@ -954,6 +1004,9 @@
     LRTVODIE,
     LRTStreamIE
 )
+from .lumni import (
+    LumniIE
+)
 from .lynda import (
     LyndaIE,
     LyndaCourseIE
@@ -1067,7 +1120,8 @@
 from .morningstar import MorningstarIE
 from .motherless import (
     MotherlessIE,
-    MotherlessGroupIE
+    MotherlessGroupIE,
+    MotherlessGalleryIE,
 )
 from .motorsport import MotorsportIE
 from .movieclips import MovieClipsIE
@@ -1108,6 +1162,7 @@
 )
 from .myvideoge import MyVideoGeIE
 from .myvidster import MyVidsterIE
+from .mzaalo import MzaaloIE
 from .n1 import (
     N1InfoAssetIE,
     N1InfoIIE,
@@ -1156,6 +1211,7 @@
     NebulaSubscriptionsIE,
     NebulaChannelIE,
 )
+from .nekohacker import NekoHackerIE
 from .nerdcubed import NerdCubedFeedIE
 from .netzkino import NetzkinoIE
 from .neteasemusic import (
@@ -1195,6 +1251,8 @@
 from .nfl import (
     NFLIE,
     NFLArticleIE,
+    NFLPlusEpisodeIE,
+    NFLPlusReplayIE,
 )
 from .nhk import (
     NhkVodIE,
@@ -1202,6 +1260,9 @@
     NhkForSchoolBangumiIE,
     NhkForSchoolSubjectIE,
     NhkForSchoolProgramListIE,
+    NhkRadioNewsPageIE,
+    NhkRadiruIE,
+    NhkRadiruLiveIE,
 )
 from .nhl import NHLIE
 from .nick import (
@@ -1221,6 +1282,7 @@
     NicovideoSearchIE,
     NicovideoSearchURLIE,
     NicovideoTagURLIE,
+    NiconicoLiveIE,
 )
 from .ninecninemedia import (
     NineCNineMediaIE,
@@ -1278,6 +1340,7 @@
 from .ntvcojp import NTVCoJpCUIE
 from .ntvde import NTVDeIE
 from .ntvru import NTVRuIE
+from .nubilesporn import NubilesPornIE
 from .nytimes import (
     NYTimesIE,
     NYTimesArticleIE,
@@ -1285,8 +1348,10 @@
 )
 from .nuvid import NuvidIE
 from .nzherald import NZHeraldIE
+from .nzonscreen import NZOnScreenIE
 from .nzz import NZZIE
 from .odatv import OdaTVIE
+from .odkmedia import OnDemandChinaEpisodeIE
 from .odnoklassniki import OdnoklassnikiIE
 from .oftv import (
     OfTVIE,
@@ -1327,6 +1392,7 @@
     ORFIPTVIE,
 )
 from .outsidetv import OutsideTVIE
+from .owncloud import OwnCloudIE
 from .packtpub import (
     PacktPubIE,
     PacktPubCourseIE,
@@ -1370,6 +1436,7 @@
     PeriscopeIE,
     PeriscopeUserIE,
 )
+from .pgatour import PGATourIE
 from .philharmoniedeparis import PhilharmonieDeParisIE
 from .phoenix import PhoenixIE
 from .photobucket import PhotobucketIE
@@ -1427,7 +1494,6 @@
     PolskieRadioPlayerIE,
     PolskieRadioPodcastIE,
     PolskieRadioPodcastListIE,
-    PolskieRadioRadioKierowcowIE,
 )
 from .popcorntimes import PopcorntimesIE
 from .popcorntv import PopcornTVIE
@@ -1450,6 +1516,7 @@
     PuhuTVIE,
     PuhuTVSerieIE,
 )
+from .pr0gramm import Pr0grammStaticIE, Pr0grammIE
 from .prankcast import PrankCastIE
 from .premiershiprugby import PremiershipRugbyIE
 from .presstv import PressTVIE
@@ -1496,19 +1563,24 @@
     RadLiveSeasonIE,
 )
 from .rai import (
+    RaiIE,
+    RaiCulturaIE,
     RaiPlayIE,
     RaiPlayPlaylistIE,
     RaiPlaySoundIE,
     RaiPlaySoundPlaylistIE,
     RaiNewsIE,
     RaiSudtirolIE,
-    RaiIE,
 )
 from .raywenderlich import (
     RayWenderlichIE,
     RayWenderlichCourseIE,
 )
 from .rbmaradio import RBMARadioIE
+from .rbgtum import (
+    RbgTumIE,
+    RbgTumCourseIE,
+)
 from .rcs import (
     RCSIE,
     RCSEmbedsIE,
@@ -1520,6 +1592,7 @@
     RCTIPlusTVIE,
 )
 from .rds import RDSIE
+from .recurbate import RecurbateIE
 from .redbee import ParliamentLiveUKIE, RTBFIE
 from .redbulltv import (
     RedBullTVIE,
@@ -1542,6 +1615,7 @@
 from .restudy import RestudyIE
 from .reuters import ReutersIE
 from .reverbnation import ReverbNationIE
+from .rheinmaintv import RheinMainTVIE
 from .rice import RICEIE
 from .rmcdecouverte import RMCDecouverteIE
 from .rockstargames import RockstarGamesIE
@@ -1556,6 +1630,7 @@
 from .rozhlas import (
     RozhlasIE,
     RozhlasVltavaIE,
+    MujRozhlasIE,
 )
 from .rte import RteIE, RteRadioIE
 from .rtlnl import (
@@ -1579,6 +1654,11 @@
 from .rtp import RTPIE
 from .rtrfm import RTRFMIE
 from .rts import RTSIE
+from .rtvcplay import (
+    RTVCPlayIE,
+    RTVCPlayEmbedIE,
+    RTVCKalturaIE,
+)
 from .rtve import (
     RTVEALaCartaIE,
     RTVEAudioIE,
@@ -1648,6 +1728,7 @@
 )
 from .scrolller import ScrolllerIE
 from .seeker import SeekerIE
+from .senalcolombia import SenalColombiaLiveIE
 from .senategov import SenateISVPIE, SenateGovIE
 from .sendtonews import SendtoNewsIE
 from .servus import ServusIE
@@ -1745,6 +1826,7 @@
     BellatorIE,
     ParamountNetworkIE,
 )
+from .stageplus import StagePlusVODConcertIE
 from .startrek import StarTrekIE
 from .stitcher import (
     StitcherIE,
@@ -1820,7 +1902,10 @@
     TeacherTubeUserIE,
 )
 from .teachingchannel import TeachingChannelIE
-from .teamcoco import TeamcocoIE
+from .teamcoco import (
+    TeamcocoIE,
+    ConanClassicIE,
+)
 from .teamtreehouse import TeamTreeHouseIE
 from .techtalks import TechTalksIE
 from .ted import (
@@ -1832,6 +1917,7 @@
 from .tele5 import Tele5IE
 from .tele13 import Tele13IE
 from .telebruxelles import TeleBruxellesIE
+from .telecaribe import TelecaribePlayIE
 from .telecinco import TelecincoIE
 from .telegraaf import TelegraafIE
 from .telegram import TelegramEmbedIE
@@ -1846,7 +1932,7 @@
 )
 from .teletask import TeleTaskIE
 from .telewebion import TelewebionIE
-from .tempo import TempoIE
+from .tempo import TempoIE, IVXPlayerIE
 from .tencent import (
     IflixEpisodeIE,
     IflixSeriesIE,
@@ -1923,6 +2009,7 @@
 from .triller import (
     TrillerIE,
     TrillerUserIE,
+    TrillerShortIE,
 )
 from .trilulilu import TriluliluIE
 from .trovo import (
@@ -1944,10 +2031,9 @@
 )
 from .tumblr import TumblrIE
 from .tunein import (
-    TuneInClipIE,
     TuneInStationIE,
-    TuneInProgramIE,
-    TuneInTopicIE,
+    TuneInPodcastIE,
+    TuneInPodcastEpisodeIE,
     TuneInShortenerIE,
 )
 from .tunepk import TunePkIE
@@ -2015,7 +2101,6 @@
 )
 from .tvplay import (
     TVPlayIE,
-    ViafreeIE,
     TVPlayHomeIE,
 )
 from .tvplayer import TVPlayerIE
@@ -2045,6 +2130,10 @@
     TwitterSpacesIE,
     TwitterShortenerIE,
 )
+from .txxx import (
+    TxxxIE,
+    PornTopIE,
+)
 from .udemy import (
     UdemyIE,
     UdemyCourseIE
@@ -2170,17 +2259,14 @@
     ViuIE,
     ViuPlaylistIE,
     ViuOTTIE,
+    ViuOTTIndonesiaIE,
 )
 from .vk import (
     VKIE,
     VKUserVideosIE,
     VKWallPostIE,
 )
-from .vlive import (
-    VLiveIE,
-    VLivePostIE,
-    VLiveChannelIE,
-)
+from .vocaroo import VocarooIE
 from .vodlocker import VodlockerIE
 from .vodpl import VODPlIE
 from .vodplatform import VODPlatformIE
@@ -2198,7 +2284,12 @@
     VoxMediaVolumeIE,
     VoxMediaIE,
 )
-from .vrt import VRTIE
+from .vrt import (
+    VRTIE,
+    VrtNUIE,
+    KetnetIE,
+    DagelijkseKostIE,
+)
 from .vrak import VrakIE
 from .vrv import (
     VRVIE,
@@ -2249,6 +2340,17 @@
     WeiboMobileIE
 )
 from .weiqitv import WeiqiTVIE
+from .weverse import (
+    WeverseIE,
+    WeverseMediaIE,
+    WeverseMomentIE,
+    WeverseLiveTabIE,
+    WeverseMediaTabIE,
+    WeverseLiveIE,
+)
+from .wevidi import WeVidiIE
+from .weyyak import WeyyakIE
+from .whyp import WhypIE
 from .wikimedia import WikimediaIE
 from .willow import WillowIE
 from .wimtv import WimTVIE
@@ -2267,11 +2369,21 @@
     WPPilotIE,
     WPPilotChannelsIE,
 )
+from .wrestleuniverse import (
+    WrestleUniverseVODIE,
+    WrestleUniversePPVIE,
+)
 from .wsj import (
     WSJIE,
     WSJArticleIE,
 )
 from .wwe import WWEIE
+from .wykop import (
+    WykopDigIE,
+    WykopDigCommentIE,
+    WykopPostIE,
+    WykopPostCommentIE,
+)
 from .xanimu import XanimuIE
 from .xbef import XBefIE
 from .xboxclips import XboxClipsIE
@@ -2291,13 +2403,14 @@
 from .xstream import XstreamIE
 from .xtube import XTubeUserIE, XTubeIE
 from .xuite import XuiteIE
-from .xvideos import XVideosIE
+from .xvideos import (
+    XVideosIE,
+    XVideosQuickiesIE
+)
 from .xxxymovies import XXXYMoviesIE
 from .yahoo import (
     YahooIE,
     YahooSearchIE,
-    YahooGyaOPlayerIE,
-    YahooGyaOIE,
     YahooJapanNewsIE,
 )
 from .yandexdisk import YandexDiskIE
@@ -2315,6 +2428,10 @@
     ZenYandexChannelIE,
 )
 from .yapfiles import YapFilesIE
+from .yappy import (
+    YappyIE,
+    YappyProfileIE,
+)
 from .yesjapan import YesJapanIE
 from .yinyuetai import YinYueTaiIE
 from .yle_areena import YleAreenaIE
@@ -2332,6 +2449,10 @@
 from .youporn import YouPornIE
 from .yourporn import YourPornIE
 from .yourupload import YourUploadIE
+from .zaiko import (
+    ZaikoIE,
+    ZaikoETicketIE,
+)
 from .zapiks import ZapiksIE
 from .zattoo import (
     BBVTVIE,
@@ -2389,6 +2510,7 @@
     ZingMp3WeekChartIE,
     ZingMp3ChartMusicVideoIE,
     ZingMp3UserIE,
+    ZingMp3HubIE,
 )
 from .zoom import ZoomIE
 from .zype import ZypeIE
yt_dlp/extractor/abematv.py
@@ -156,7 +156,7 @@ class AbemaTVBaseIE(InfoExtractor):
     def _generate_aks(cls, deviceid):
         deviceid = deviceid.encode('utf-8')
         # add 1 hour and then drop minute and secs
-        ts_1hour = int((time_seconds(hours=9) // 3600 + 1) * 3600)
+        ts_1hour = int((time_seconds() // 3600 + 1) * 3600)
         time_struct = time.gmtime(ts_1hour)
         ts_1hour_str = str(ts_1hour).encode('utf-8')
 
@@ -190,6 +190,16 @@ def _get_device_token(self):
         if self._USERTOKEN:
             return self._USERTOKEN
 
+        username, _ = self._get_login_info()
+        AbemaTVBaseIE._USERTOKEN = username and self.cache.load(self._NETRC_MACHINE, username)
+        if AbemaTVBaseIE._USERTOKEN:
+            # try authentication with locally stored token
+            try:
+                self._get_media_token(True)
+                return
+            except ExtractorError as e:
+                self.report_warning(f'Failed to login with cached user token; obtaining a fresh one ({e})')
+
         AbemaTVBaseIE._DEVICE_ID = str(uuid.uuid4())
         aks = self._generate_aks(self._DEVICE_ID)
         user_data = self._download_json(
@@ -300,6 +310,11 @@ class AbemaTVIE(AbemaTVBaseIE):
     _TIMETABLE = None
 
     def _perform_login(self, username, password):
+        self._get_device_token()
+        if self.cache.load(self._NETRC_MACHINE, username) and self._get_media_token():
+            self.write_debug('Skipping logging in')
+            return
+
         if '@' in username:  # don't strictly check if it's email address or not
             ep, method = 'user/email', 'email'
         else:
@@ -319,6 +334,7 @@ def _perform_login(self, username, password):
 
         AbemaTVBaseIE._USERTOKEN = login_response['token']
         self._get_media_token(True)
+        self.cache.store(self._NETRC_MACHINE, username, AbemaTVBaseIE._USERTOKEN)
 
     def _real_extract(self, url):
         # starting download using infojson from this extractor is undefined behavior,
@@ -416,10 +432,20 @@ def _real_extract(self, url):
                 f'https://api.abema.io/v1/video/programs/{video_id}', video_id,
                 note='Checking playability',
                 headers=headers)
-            ondemand_types = traverse_obj(api_response, ('terms', ..., 'onDemandType'), default=[])
+            ondemand_types = traverse_obj(api_response, ('terms', ..., 'onDemandType'))
             if 3 not in ondemand_types:
                 # cannot acquire decryption key for these streams
                 self.report_warning('This is a premium-only stream')
+            info.update(traverse_obj(api_response, {
+                'series': ('series', 'title'),
+                'season': ('season', 'title'),
+                'season_number': ('season', 'sequence'),
+                'episode_number': ('episode', 'number'),
+            }))
+            if not title:
+                title = traverse_obj(api_response, ('episode', 'title'))
+            if not description:
+                description = traverse_obj(api_response, ('episode', 'content'))
 
             m3u8_url = f'https://vod-abematv.akamaized.net/program/{video_id}/playlist.m3u8'
         elif video_type == 'slots':
@@ -489,7 +515,7 @@ def _fetch_page(self, playlist_id, series_version, page):
         })
         yield from (
             self.url_result(f'https://abema.tv/video/episode/{x}')
-            for x in traverse_obj(programs, ('programs', ..., 'id'), default=[]))
+            for x in traverse_obj(programs, ('programs', ..., 'id')))
 
     def _entries(self, playlist_id, series_version):
         return OnDemandPagedList(
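The `_get_device_token` change above is a standard cache-then-refresh credential flow: reuse a locally stored token if it still validates, otherwise fall back to a full login and persist the result. A rough sketch of the pattern outside yt-dlp (all helper names here are hypothetical):

def get_user_token(cache, username, validate, fresh_login):
    # Prefer a cached token; any validation failure triggers a fresh login.
    token = username and cache.get(username)
    if token:
        try:
            validate(token)  # e.g. request a media token with it
            return token
        except Exception as e:
            print(f'cached token rejected, re-authenticating ({e})')
    token = fresh_login()  # full credential exchange
    if username:
        cache[username] = token  # persist for the next run
    return token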
yt_dlp/extractor/acast.py
@@ -40,28 +40,33 @@ def _call_api(self, path, video_id, query=None):
 
 
 class ACastIE(ACastBaseIE):
     IE_NAME = 'acast'
-    _VALID_URL = r'''(?x)
+    _VALID_URL = r'''(?x:
                     https?://
                         (?:
                             (?:(?:embed|www)\.)?acast\.com/|
                             play\.acast\.com/s/
                         )
-                        (?P<channel>[^/]+)/(?P<id>[^/#?]+)
-                    '''
+                        (?P<channel>[^/]+)/(?P<id>[^/#?"]+)
+                    )'''
+    _EMBED_REGEX = [rf'(?x)<iframe[^>]+\bsrc=[\'"](?P<url>{_VALID_URL})']
     _TESTS = [{
         'url': 'https://www.acast.com/sparpodcast/2.raggarmordet-rosterurdetforflutna',
-        'md5': 'f5598f3ad1e4776fed12ec1407153e4b',
         'info_dict': {
             'id': '2a92b283-1a75-4ad8-8396-499c641de0d9',
             'ext': 'mp3',
             'title': '2. Raggarmordet - Röster ur det förflutna',
-            'description': 'md5:a992ae67f4d98f1c0141598f7bebbf67',
+            'description': 'md5:013959207e05011ad14a222cf22278cc',
             'timestamp': 1477346700,
             'upload_date': '20161024',
             'duration': 2766,
-            'creator': 'Anton Berg & Martin Johnson',
+            'creator': 'Third Ear Studio',
             'series': 'Spår',
             'episode': '2. Raggarmordet - Röster ur det förflutna',
+            'thumbnail': 'https://assets.pippa.io/shows/616ebe1886d7b1398620b943/616ebe33c7e6e70013cae7da.jpg',
+            'episode_number': 2,
+            'display_id': '2.raggarmordet-rosterurdetforflutna',
+            'season_number': 4,
+            'season': 'Season 4',
         }
     }, {
         'url': 'http://embed.acast.com/adambuxton/ep.12-adam-joeschristmaspodcast2015',
@@ -73,6 +78,23 @@ class ACastIE(ACastBaseIE):
         'url': 'https://play.acast.com/s/sparpodcast/2a92b283-1a75-4ad8-8396-499c641de0d9',
         'only_matching': True,
     }]
+    _WEBPAGE_TESTS = [{
+        'url': 'https://ausi.anu.edu.au/news/democracy-sausage-episode-can-labor-be-long-form-government',
+        'info_dict': {
+            'id': '646c68fb21fbf20011e9c651',
+            'ext': 'mp3',
+            'creator': 'The Australian National University',
+            'display_id': 'can-labor-be-a-long-form-government',
+            'duration': 2618,
+            'thumbnail': 'https://assets.pippa.io/shows/6113e8578b4903809f16f7e5/1684821529295-515b9520db9ce53275b995eb302f941c.jpeg',
+            'title': 'Can Labor be a long-form government?',
+            'episode': 'Can Labor be a long-form government?',
+            'upload_date': '20230523',
+            'series': 'Democracy Sausage with Mark Kenny',
+            'timestamp': 1684826362,
+            'description': 'md5:feabe1fc5004c78ee59c84a46bf4ba16',
+        }
+    }]
 
     def _real_extract(self, url):
         channel, display_id = self._match_valid_url(url).groups()
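Wrapping `_VALID_URL` in an inline `(?x:...)` group, as the acast diff does, scopes the verbose-mode flag to the pattern itself, so the URL regex can be interpolated into `_EMBED_REGEX` without a mid-pattern `(?x)` flag, which newer Python versions reject. A small self-contained illustration (simplified sample pattern and HTML, not the full yt-dlp regex):

import re

VALID_URL = r'''(?x:
    https?://
    (?:(?:embed|www)\.)?acast\.com/
    (?P<channel>[^/]+)/(?P<id>[^/#?"]+)
)'''

# The embedded copy keeps its verbose whitespace handling thanks to (?x:...)
EMBED_RE = rf'<iframe[^>]+\bsrc=[\'"](?P<url>{VALID_URL})'

html = '<iframe width="100%" src="https://embed.acast.com/chan/ep1"></iframe>'
match = re.search(EMBED_RE, html)
print(match.group('url'))  # https://embed.acast.com/chan/ep1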
yt_dlp/extractor/adobepass.py
@@ -1573,7 +1573,7 @@ def extract_redirect_url(html, url=None, fatal=False):
                }), headers={
                    'Content-Type': 'application/x-www-form-urlencoded'
                })
-            elif mso_id == 'Spectrum':
+            elif mso_id in ('Spectrum', 'Charter_Direct'):
                # Spectrum's login for is dynamically loaded via JS so we need to hardcode the flow
                # as a one-off implementation.
                provider_redirect_page, urlh = provider_redirect_page_res
yt_dlp/extractor/aenetworks.py
@@ -3,6 +3,8 @@
     ExtractorError,
     GeoRestrictedError,
     int_or_none,
+    remove_start,
+    traverse_obj,
     update_url_query,
     urlencode_postdata,
 )
@@ -72,7 +74,14 @@ def _extract_aetn_info(self, domain, filter_key, filter_value, url):
         requestor_id, brand = self._DOMAIN_MAP[domain]
         result = self._download_json(
             'https://feeds.video.aetnd.com/api/v2/%s/videos' % brand,
-            filter_value, query={'filter[%s]' % filter_key: filter_value})['results'][0]
+            filter_value, query={'filter[%s]' % filter_key: filter_value})
+        result = traverse_obj(
+            result, ('results',
+                     lambda k, v: k == 0 and v[filter_key] == filter_value),
+            get_all=False)
+        if not result:
+            raise ExtractorError('Show not found in A&E feed (too new?)', expected=True,
+                                 video_id=remove_start(filter_value, '/'))
         title = result['title']
         video_id = result['id']
         media_url = result['publicUrl']
@@ -123,7 +132,7 @@ class AENetworksIE(AENetworksBaseIE):
             'skip_download': True,
         },
         'add_ie': ['ThePlatform'],
-        'skip': 'This video is only available for users of participating TV providers.',
+        'skip': 'Geo-restricted - This content is not available in your location.'
     }, {
         'url': 'http://www.aetv.com/shows/duck-dynasty/season-9/episode-1',
         'info_dict': {
@@ -140,6 +149,7 @@ class AENetworksIE(AENetworksBaseIE):
             'skip_download': True,
         },
         'add_ie': ['ThePlatform'],
+        'skip': 'This video is only available for users of participating TV providers.',
     }, {
         'url': 'http://www.fyi.tv/shows/tiny-house-nation/season-1/episode-8',
         'only_matching': True
@@ -303,6 +313,7 @@ def _real_extract(self, url):
 class HistoryPlayerIE(AENetworksBaseIE):
     IE_NAME = 'history:player'
     _VALID_URL = r'https?://(?:www\.)?(?P<domain>(?:history|biography)\.com)/player/(?P<id>\d+)'
+    _TESTS = []
 
     def _real_extract(self, url):
         domain, video_id = self._match_valid_url(url).groups()
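The feed lookup above swaps a bare `['results'][0]` index for a guarded `traverse_obj` query that checks the first result actually matches the requested filter, yielding `None` instead of raising on a malformed feed. The same defensive idea in plain Python (a simplified stand-in for yt-dlp's `traverse_obj`, with illustrative feed data):

def first_matching(results, key, value):
    # Return the first entry whose `key` equals `value`, else None.
    return next((r for r in results or [] if r.get(key) == value), None)

feed = {'results': [{'canonical': '/shows/duck-dynasty', 'title': 'Duck Dynasty'}]}
show = first_matching(feed.get('results'), 'canonical', '/shows/duck-dynasty')
if not show:
    raise LookupError('Show not found in A&E feed (too new?)')
print(show['title'])  # Duck Dynasty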
yt_dlp/extractor/aeonco.py
@@ -1,5 +1,6 @@
 from .common import InfoExtractor
 from .vimeo import VimeoIE
+from ..utils import ExtractorError, traverse_obj, url_or_none
 
 
 class AeonCoIE(InfoExtractor):
@@ -19,22 +20,55 @@ class AeonCoIE(InfoExtractor):
         }
     }, {
         'url': 'https://aeon.co/videos/dazzling-timelapse-shows-how-microbes-spoil-our-food-and-sometimes-enrich-it',
-        'md5': '4e5f3dad9dbda0dbfa2da41a851e631e',
+        'md5': '03582d795382e49f2fd0b427b55de409',
         'info_dict': {
-            'id': '728595228',
+            'id': '759576926',
             'ext': 'mp4',
             'title': 'Wrought',
-            'thumbnail': 'https://i.vimeocdn.com/video/1484618528-c91452611f9a4e4497735a533da60d45b2fe472deb0c880f0afaab0cd2efb22a-d_1280',
-            'uploader': 'Biofilm Productions',
-            'uploader_id': 'user140352216',
-            'uploader_url': 'https://vimeo.com/user140352216',
+            'thumbnail': 'https://i.vimeocdn.com/video/1525599692-84614af88e446612f49ca966cf8f80eab2c73376bedd80555741c521c26f9a3e-d_1280',
+            'uploader': 'Aeon Video',
+            'uploader_id': 'aeonvideo',
+            'uploader_url': 'https://vimeo.com/aeonvideo',
             'duration': 1344
         }
+    }, {
+        'url': 'https://aeon.co/videos/chew-over-the-prisoners-dilemma-and-see-if-you-can-find-the-rational-path-out',
+        'md5': '1cfda0bf3ae24df17d00f2c0cb6cc21b',
+        'info_dict': {
+            'id': 'emyi4z-O0ls',
+            'ext': 'mp4',
+            'title': 'How to outsmart the Prisoner’s Dilemma - Lucas Husted',
+            'thumbnail': 'https://i.ytimg.com/vi_webp/emyi4z-O0ls/maxresdefault.webp',
+            'uploader': 'TED-Ed',
+            'uploader_id': '@TEDEd',
+            'uploader_url': 'https://www.youtube.com/@TEDEd',
+            'duration': 344,
+            'upload_date': '20200827',
+            'channel_id': 'UCsooa4yRKGN_zEE8iknghZA',
+            'playable_in_embed': True,
+            'description': 'md5:c0959524f08cb60f96fd010f3dfb17f3',
+            'categories': ['Education'],
+            'like_count': int,
+            'channel': 'TED-Ed',
+            'chapters': 'count:7',
+            'channel_url': 'https://www.youtube.com/channel/UCsooa4yRKGN_zEE8iknghZA',
+            'tags': 'count:26',
+            'availability': 'public',
+            'channel_follower_count': int,
+            'view_count': int,
+            'age_limit': 0,
+            'live_status': 'not_live',
+            'comment_count': int,
+        },
     }]
 
     def _real_extract(self, url):
         video_id = self._match_id(url)
         webpage = self._download_webpage(url, video_id)
-        vimeo_id = self._search_regex(r'hosterId":\s*"(?P<id>[0-9]+)', webpage, 'vimeo id')
-        vimeo_url = VimeoIE._smuggle_referrer(f'https://player.vimeo.com/video/{vimeo_id}', 'https://aeon.co')
-        return self.url_result(vimeo_url, VimeoIE)
+        embed_url = traverse_obj(self._yield_json_ld(webpage, video_id), (
+            lambda _, v: v['@type'] == 'VideoObject', 'embedUrl', {url_or_none}), get_all=False)
+        if not embed_url:
+            raise ExtractorError('No embed URL found in webpage')
+        if 'player.vimeo.com' in embed_url:
+            embed_url = VimeoIE._smuggle_referrer(embed_url, 'https://aeon.co/')
+        return self.url_result(embed_url)
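The rewritten `_real_extract` above no longer scrapes a Vimeo id with a page-specific regex; it reads `embedUrl` from the page's JSON-LD `VideoObject`, which also covers YouTube-hosted videos. A minimal stdlib sketch of that lookup (yt-dlp's own `_yield_json_ld`/`traverse_obj` helpers are considerably more robust than this):

import json
import re

def jsonld_embed_url(webpage):
    # Scan each JSON-LD block and return the first VideoObject embedUrl.
    for block in re.findall(
            r'<script[^>]+type="application/ld\+json"[^>]*>(.+?)</script>',
            webpage, re.DOTALL):
        data = json.loads(block)
        for obj in data if isinstance(data, list) else [data]:
            if obj.get('@type') == 'VideoObject' and obj.get('embedUrl'):
                return obj['embedUrl']

page = ('<script type="application/ld+json">'
        '{"@type": "VideoObject", "embedUrl": "https://player.vimeo.com/video/1"}'
        '</script>')
print(jsonld_embed_url(page))  # https://player.vimeo.com/video/1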
yt_dlp/extractor/afreecatv.py
@@ -76,59 +76,6 @@ class AfreecaTVIE(InfoExtractor):
             },
         }],
         'skip': 'Video is gone',
-    }, {
-        'url': 'http://vod.afreecatv.com/PLAYER/STATION/18650793',
-        'info_dict': {
-            'id': '18650793',
-            'ext': 'mp4',
-            'title': '오늘은 다르다! 쏘님의 우월한 위아래~ 댄스리액션!',
-            'thumbnail': r're:^https?://.*\.jpg$',
-            'uploader': '윈아디',
-            'uploader_id': 'badkids',
-            'duration': 107,
-        },
-        'params': {
-            'skip_download': True,
-        },
-    }, {
-        'url': 'http://vod.afreecatv.com/PLAYER/STATION/10481652',
-        'info_dict': {
-            'id': '10481652',
-            'title': "BJ유트루와 함께하는 '팅커벨 메이크업!'",
-            'thumbnail': 're:^https?://(?:video|st)img.afreecatv.com/.*$',
-            'uploader': 'dailyapril',
-            'uploader_id': 'dailyapril',
-            'duration': 6492,
-        },
-        'playlist_count': 2,
-        'playlist': [{
-            'md5': 'd8b7c174568da61d774ef0203159bf97',
-            'info_dict': {
-                'id': '20160502_c4c62b9d_174361386_1',
-                'ext': 'mp4',
-                'title': "BJ유트루와 함께하는 '팅커벨 메이크업!' (part 1)",
-                'thumbnail': 're:^https?://(?:video|st)img.afreecatv.com/.*$',
-                'uploader': 'dailyapril',
-                'uploader_id': 'dailyapril',
-                'upload_date': '20160502',
-                'duration': 3601,
-            },
-        }, {
-            'md5': '58f2ce7f6044e34439ab2d50612ab02b',
-            'info_dict': {
-                'id': '20160502_39e739bb_174361386_2',
-                'ext': 'mp4',
-                'title': "BJ유트루와 함께하는 '팅커벨 메이크업!' (part 2)",
-                'thumbnail': 're:^https?://(?:video|st)img.afreecatv.com/.*$',
-                'uploader': 'dailyapril',
-                'uploader_id': 'dailyapril',
-                'upload_date': '20160502',
-                'duration': 2891,
-            },
-        }],
-        'params': {
-            'skip_download': True,
-        },
     }, {
         # non standard key
         'url': 'http://vod.afreecatv.com/PLAYER/STATION/20515605',
@@ -146,8 +93,8 @@ class AfreecaTVIE(InfoExtractor):
             'skip_download': True,
         },
     }, {
-        # PARTIAL_ADULT
-        'url': 'http://vod.afreecatv.com/PLAYER/STATION/32028439',
+        # adult content
+        'url': 'https://vod.afreecatv.com/player/97267690',
         'info_dict': {
             'id': '20180327_27901457_202289533_1',
             'ext': 'mp4',
@@ -161,16 +108,25 @@ class AfreecaTVIE(InfoExtractor):
         'params': {
             'skip_download': True,
         },
-        'expected_warnings': ['adult content'],
+        'skip': 'The VOD does not exist',
     }, {
         'url': 'http://www.afreecatv.com/player/Player.swf?szType=szBjId=djleegoon&nStationNo=11273158&nBbsNo=13161095&nTitleNo=36327652',
         'only_matching': True,
     }, {
-        'url': 'http://vod.afreecatv.com/PLAYER/STATION/15055030',
-        'only_matching': True,
-    }, {
-        'url': 'http://vod.afreecatv.com/player/15055030',
-        'only_matching': True,
+        'url': 'https://vod.afreecatv.com/player/96753363',
+        'info_dict': {
+            'id': '20230108_9FF5BEE1_244432674_1',
+            'ext': 'mp4',
+            'uploader_id': 'rlantnghks',
+            'uploader': '페이즈으',
+            'duration': 10840,
+            'thumbnail': 'http://videoimg.afreecatv.com/php/SnapshotLoad.php?rowKey=20230108_9FF5BEE1_244432674_1_r',
+            'upload_date': '20230108',
+            'title': '젠지 페이즈',
+        },
+        'params': {
+            'skip_download': True,
+        },
     }]
 
     @staticmethod
@@ -223,26 +179,21 @@ def _perform_login(self, username, password):
     def _real_extract(self, url):
         video_id = self._match_id(url)
 
-        webpage = self._download_webpage(url, video_id)
-
-        if re.search(r'alert\(["\']This video has been deleted', webpage):
-            raise ExtractorError(
-                'Video %s has been deleted' % video_id, expected=True)
-
-        station_id = self._search_regex(
-            r'nStationNo\s*=\s*(\d+)', webpage, 'station')
-        bbs_id = self._search_regex(
-            r'nBbsNo\s*=\s*(\d+)', webpage, 'bbs')
-        video_id = self._search_regex(
-            r'nTitleNo\s*=\s*(\d+)', webpage, 'title', default=video_id)
-
         partial_view = False
         adult_view = False
         for _ in range(2):
+            data = self._download_json(
+                'https://api.m.afreecatv.com/station/video/a/view',
+                video_id, headers={'Referer': url}, data=urlencode_postdata({
+                    'nTitleNo': video_id,
+                    'nApiLevel': 10,
+                }))['data']
+            if traverse_obj(data, ('code', {int})) == -6221:
+                raise ExtractorError('The VOD does not exist', expected=True)
             query = {
                 'nTitleNo': video_id,
-                'nStationNo': station_id,
-                'nBbsNo': bbs_id,
+                'nStationNo': data['station_no'],
+                'nBbsNo': data['bbs_no'],
             }
             if partial_view:
                 query['partialView'] = 'SKIP_ADULT'
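The afreecatv rewrite above drops HTML scraping in favour of a form-encoded POST to the mobile API, taking the station/bbs numbers from the JSON response. A sketch of that request shape using only the standard library (endpoint and field names as in the diff; error handling reduced to the single code it checks):

import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def fetch_video_view(video_id, referer):
    # POST the same lookup the new extractor performs and unpack its ids.
    req = Request(
        'https://api.m.afreecatv.com/station/video/a/view',
        data=urlencode({'nTitleNo': video_id, 'nApiLevel': 10}).encode(),
        headers={'Referer': referer})
    data = json.load(urlopen(req))['data']
    if data.get('code') == -6221:
        raise RuntimeError('The VOD does not exist')
    return data['station_no'], data['bbs_no']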
yt_dlp/extractor/amazonminitv.py
@@ -191,7 +191,7 @@ def _real_extract(self, url):
 class AmazonMiniTVSeasonIE(AmazonMiniTVBaseIE):
     IE_NAME = 'amazonminitv:season'
     _VALID_URL = r'amazonminitv:season:(?:amzn1\.dv\.gti\.)?(?P<id>[a-f0-9-]+)'
-    IE_DESC = 'Amazon MiniTV Series, "minitv:season:" prefix'
+    IE_DESC = 'Amazon MiniTV Season, "minitv:season:" prefix'
     _TESTS = [{
         'url': 'amazonminitv:season:amzn1.dv.gti.0aa996eb-6a1b-4886-a342-387fbd2f1db0',
         'playlist_mincount': 6,
@@ -250,6 +250,7 @@ def _real_extract(self, url):
 class AmazonMiniTVSeriesIE(AmazonMiniTVBaseIE):
     IE_NAME = 'amazonminitv:series'
     _VALID_URL = r'amazonminitv:series:(?:amzn1\.dv\.gti\.)?(?P<id>[a-f0-9-]+)'
+    IE_DESC = 'Amazon MiniTV Series, "minitv:series:" prefix'
     _TESTS = [{
         'url': 'amazonminitv:series:amzn1.dv.gti.56521d46-b040-4fd5-872e-3e70476a04b0',
         'playlist_mincount': 3,
@ -11,7 +11,7 @@
|
|||||||
|
|
||||||
|
|
||||||
class AmericasTestKitchenIE(InfoExtractor):
|
class AmericasTestKitchenIE(InfoExtractor):
|
||||||
_VALID_URL = r'https?://(?:www\.)?americastestkitchen\.com/(?:cooks(?:country|illustrated)/)?(?P<resource_type>episode|videos)/(?P<id>\d+)'
|
_VALID_URL = r'https?://(?:www\.)?(?:americastestkitchen|cooks(?:country|illustrated))\.com/(?:cooks(?:country|illustrated)/)?(?P<resource_type>episode|videos)/(?P<id>\d+)'
|
||||||
_TESTS = [{
|
_TESTS = [{
|
||||||
'url': 'https://www.americastestkitchen.com/episode/582-weeknight-japanese-suppers',
|
'url': 'https://www.americastestkitchen.com/episode/582-weeknight-japanese-suppers',
|
||||||
'md5': 'b861c3e365ac38ad319cfd509c30577f',
|
'md5': 'b861c3e365ac38ad319cfd509c30577f',
|
||||||
@ -72,6 +72,12 @@ class AmericasTestKitchenIE(InfoExtractor):
|
|||||||
}, {
|
}, {
|
||||||
'url': 'https://www.americastestkitchen.com/cooksillustrated/videos/4478-beef-wellington',
|
'url': 'https://www.americastestkitchen.com/cooksillustrated/videos/4478-beef-wellington',
|
||||||
'only_matching': True,
|
'only_matching': True,
|
||||||
|
}, {
|
||||||
|
'url': 'https://www.cookscountry.com/episode/564-when-only-chocolate-will-do',
|
||||||
|
'only_matching': True,
|
||||||
|
}, {
|
||||||
|
'url': 'https://www.cooksillustrated.com/videos/4478-beef-wellington',
|
||||||
|
'only_matching': True,
|
||||||
}]
|
}]
|
||||||
|
|
||||||
def _real_extract(self, url):
|
def _real_extract(self, url):
|
||||||
@ -100,7 +106,7 @@ def _real_extract(self, url):
|
|||||||
|
|
||||||
|
|
||||||
class AmericasTestKitchenSeasonIE(InfoExtractor):
|
class AmericasTestKitchenSeasonIE(InfoExtractor):
|
||||||
_VALID_URL = r'https?://(?:www\.)?americastestkitchen\.com(?P<show>/cookscountry)?/episodes/browse/season_(?P<id>\d+)'
|
_VALID_URL = r'https?://(?:www\.)?(?P<show>americastestkitchen|(?P<cooks>cooks(?:country|illustrated)))\.com(?:(?:/(?P<show2>cooks(?:country|illustrated)))?(?:/?$|(?<!ated)(?<!ated\.com)/episodes/browse/season_(?P<season>\d+)))'
|
||||||
_TESTS = [{
|
_TESTS = [{
|
||||||
# ATK Season
|
# ATK Season
|
||||||
'url': 'https://www.americastestkitchen.com/episodes/browse/season_1',
|
'url': 'https://www.americastestkitchen.com/episodes/browse/season_1',
|
||||||
@ -117,29 +123,73 @@ class AmericasTestKitchenSeasonIE(InfoExtractor):
|
|||||||
'title': 'Season 12',
|
'title': 'Season 12',
|
||||||
},
|
},
|
||||||
'playlist_count': 13,
|
'playlist_count': 13,
|
||||||
|
}, {
|
||||||
|
# America's Test Kitchen Series
|
||||||
|
'url': 'https://www.americastestkitchen.com/',
|
||||||
|
'info_dict': {
|
||||||
|
'id': 'americastestkitchen',
|
||||||
|
'title': 'America\'s Test Kitchen',
|
||||||
|
},
|
||||||
|
'playlist_count': 558,
|
||||||
|
}, {
|
||||||
|
# Cooks Country Series
|
||||||
|
'url': 'https://www.americastestkitchen.com/cookscountry',
|
||||||
|
'info_dict': {
|
||||||
|
'id': 'cookscountry',
|
||||||
|
'title': 'Cook\'s Country',
|
||||||
|
},
|
||||||
|
'playlist_count': 199,
|
||||||
|
}, {
|
||||||
|
'url': 'https://www.americastestkitchen.com/cookscountry/',
|
||||||
|
'only_matching': True,
|
||||||
|
}, {
|
||||||
|
'url': 'https://www.cookscountry.com/episodes/browse/season_12',
|
||||||
|
'only_matching': True,
|
||||||
|
}, {
|
||||||
|
'url': 'https://www.cookscountry.com',
|
||||||
|
'only_matching': True,
|
||||||
|
}, {
|
||||||
|
'url': 'https://www.americastestkitchen.com/cooksillustrated/',
|
||||||
|
'only_matching': True,
|
||||||
|
}, {
|
||||||
|
'url': 'https://www.cooksillustrated.com',
|
||||||
|
'only_matching': True,
|
||||||
}]
|
}]
|
||||||
|
|
||||||
def _real_extract(self, url):
|
def _real_extract(self, url):
|
||||||
show_path, season_number = self._match_valid_url(url).group('show', 'id')
|
season_number, show1, show = self._match_valid_url(url).group('season', 'show', 'show2')
|
||||||
season_number = int(season_number)
|
show_path = ('/' + show) if show else ''
|
||||||
|
show = show or show1
|
||||||
|
season_number = int_or_none(season_number)
|
||||||
|
|
||||||
slug = 'cco' if show_path == '/cookscountry' else 'atk'
|
slug, title = {
|
||||||
|
'americastestkitchen': ('atk', 'America\'s Test Kitchen'),
|
||||||
|
'cookscountry': ('cco', 'Cook\'s Country'),
|
||||||
|
'cooksillustrated': ('cio', 'Cook\'s Illustrated'),
|
||||||
|
}[show]
|
||||||
|
|
||||||
season = 'Season %d' % season_number
|
facet_filters = [
|
||||||
|
'search_document_klass:episode',
|
||||||
|
'search_show_slug:' + slug,
|
||||||
|
]
|
||||||
|
|
||||||
|
if season_number:
|
||||||
|
playlist_id = 'season_%d' % season_number
|
||||||
|
playlist_title = 'Season %d' % season_number
|
||||||
|
facet_filters.append('search_season_list:' + playlist_title)
|
||||||
|
else:
|
||||||
|
playlist_id = show
|
||||||
|
playlist_title = title
|
||||||
|
|
||||||
season_search = self._download_json(
|
season_search = self._download_json(
|
||||||
'https://y1fnzxui30-dsn.algolia.net/1/indexes/everest_search_%s_season_desc_production' % slug,
|
'https://y1fnzxui30-dsn.algolia.net/1/indexes/everest_search_%s_season_desc_production' % slug,
|
||||||
season, headers={
|
playlist_id, headers={
|
||||||
'Origin': 'https://www.americastestkitchen.com',
|
'Origin': 'https://www.americastestkitchen.com',
|
||||||
'X-Algolia-API-Key': '8d504d0099ed27c1b73708d22871d805',
|
'X-Algolia-API-Key': '8d504d0099ed27c1b73708d22871d805',
|
||||||
'X-Algolia-Application-Id': 'Y1FNZXUI30',
|
'X-Algolia-Application-Id': 'Y1FNZXUI30',
|
||||||
}, query={
|
}, query={
|
||||||
'facetFilters': json.dumps([
|
'facetFilters': json.dumps(facet_filters),
|
||||||
'search_season_list:' + season,
|
'attributesToRetrieve': 'description,search_%s_episode_number,search_document_date,search_url,title,search_atk_episode_season' % slug,
|
||||||
'search_document_klass:episode',
|
|
||||||
'search_show_slug:' + slug,
|
|
||||||
]),
|
|
||||||
'attributesToRetrieve': 'description,search_%s_episode_number,search_document_date,search_url,title' % slug,
|
|
||||||
'attributesToHighlight': '',
|
'attributesToHighlight': '',
|
||||||
'hitsPerPage': 1000,
|
'hitsPerPage': 1000,
|
||||||
})
|
})
|
||||||
@ -162,4 +212,4 @@ def entries():
|
|||||||
}
|
}
|
||||||
|
|
||||||
return self.playlist_result(
|
return self.playlist_result(
|
||||||
entries(), 'season_%d' % season_number, season)
|
entries(), playlist_id, playlist_title)
|
||||||
|
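Note: the reworked AmericasTestKitchenSeasonIE above derives an Algolia slug, a playlist id/title, and the facet filters from the matched show before querying the search index. A minimal standalone sketch of that mapping logic (build_atk_query is an illustrative helper, not part of the extractor):

    # Mirrors the slug/title lookup and facet-filter construction in the diff.
    def build_atk_query(show, season_number=None):
        slug, title = {
            'americastestkitchen': ('atk', "America's Test Kitchen"),
            'cookscountry': ('cco', "Cook's Country"),
            'cooksillustrated': ('cio', "Cook's Illustrated"),
        }[show]
        facet_filters = ['search_document_klass:episode', 'search_show_slug:' + slug]
        if season_number:  # season browse page: restrict to a single season
            playlist_id = 'season_%d' % season_number
            playlist_title = 'Season %d' % season_number
            facet_filters.append('search_season_list:' + playlist_title)
        else:  # show root page: every episode of the show
            playlist_id, playlist_title = show, title
        return slug, playlist_id, playlist_title, facet_filters

    print(build_atk_query('cookscountry', 12))
    # ('cco', 'season_12', 'Season 12', ['search_document_klass:episode',
    #  'search_show_slug:cco', 'search_season_list:Season 12'])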
@@ -5,6 +5,7 @@
     int_or_none,
     mimetype2ext,
     parse_iso8601,
+    strip_jsonp,
     unified_timestamp,
     url_or_none,
 )
@@ -15,7 +16,7 @@ class AMPIE(InfoExtractor): # XXX: Conventionally, base classes should end with
     def _extract_feed_info(self, url):
         feed = self._download_json(
             url, None, 'Downloading Akamai AMP feed',
-            'Unable to download Akamai AMP feed')
+            'Unable to download Akamai AMP feed', transform_source=strip_jsonp)
         item = feed.get('channel', {}).get('item')
         if not item:
             raise ExtractorError('%s said: %s' % (self.IE_NAME, feed['error']))
@@ -73,8 +74,10 @@ def get_media_node(name, default=None):
                     media_url + '?hdcore=3.4.0&plugin=aasp-3.4.0.132.124',
                     video_id, f4m_id='hds', fatal=False))
             elif ext == 'm3u8':
-                formats.extend(self._extract_m3u8_formats(
-                    media_url, video_id, 'mp4', m3u8_id='hls', fatal=False))
+                fmts, subs = self._extract_m3u8_formats_and_subtitles(
+                    media_url, video_id, 'mp4', m3u8_id='hls', fatal=False)
+                formats.extend(fmts)
+                self._merge_subtitles(subs, target=subtitles)
             else:
                 formats.append({
                     'format_id': media_data.get('media-category', {}).get('@attributes', {}).get('label'),
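Note: the AMP feed is now downloaded with transform_source=strip_jsonp, so JSONP-wrapped responses are unwrapped before JSON parsing. Roughly (sample payload invented; output shown is indicative):

    from yt_dlp.utils import strip_jsonp

    # strip_jsonp peels off a `callback(...)` wrapper so the body parses as JSON
    jsonp = 'jQuery12345_6789({"channel": {"item": {"title": "clip"}}});'
    print(strip_jsonp(jsonp))  # {"channel": {"item": {"title": "clip"}}}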
 yt_dlp/extractor/anchorfm.py (new file, 98 lines)
@@ -0,0 +1,98 @@
+from .common import InfoExtractor
+from ..utils import (
+    clean_html,
+    float_or_none,
+    int_or_none,
+    str_or_none,
+    traverse_obj,
+    unified_timestamp
+)
+
+
+class AnchorFMEpisodeIE(InfoExtractor):
+    _VALID_URL = r'https?://anchor\.fm/(?P<channel_name>\w+)/(?:embed/)?episodes/[\w-]+-(?P<episode_id>\w+)'
+    _EMBED_REGEX = [rf'<iframe[^>]+\bsrc=[\'"](?P<url>{_VALID_URL})']
+    _TESTS = [{
+        'url': 'https://anchor.fm/lovelyti/episodes/Chrisean-Rock-takes-to-twitter-to-announce-shes-pregnant--Blueface-denies-he-is-the-father-e1tpt3d',
+        'info_dict': {
+            'id': 'e1tpt3d',
+            'ext': 'mp3',
+            'title': ' Chrisean Rock takes to twitter to announce she\'s pregnant, Blueface denies he is the father!',
+            'description': 'md5:207d167de3e28ceb4ddc1ebf5a30044c',
+            'thumbnail': 'https://s3-us-west-2.amazonaws.com/anchor-generated-image-bank/production/podcast_uploaded_nologo/1034827/1034827-1658438968460-5f3bfdf3601e8.jpg',
+            'duration': 624.718,
+            'uploader': 'Lovelyti ',
+            'uploader_id': '991541',
+            'channel': 'lovelyti',
+            'modified_date': '20230121',
+            'modified_timestamp': 1674285178,
+            'release_date': '20230121',
+            'release_timestamp': 1674285179,
+            'episode_id': 'e1tpt3d',
+        }
+    }, {
+        # embed url
+        'url': 'https://anchor.fm/apakatatempo/embed/episodes/S2E75-Perang-Bintang-di-Balik-Kasus-Ferdy-Sambo-dan-Ismail-Bolong-e1shjqd',
+        'info_dict': {
+            'id': 'e1shjqd',
+            'ext': 'mp3',
+            'title': 'S2E75 Perang Bintang di Balik Kasus Ferdy Sambo dan Ismail Bolong',
+            'description': 'md5:9e95ad9293bf00178bf8d33e9cb92c41',
+            'duration': 1042.008,
+            'thumbnail': 'https://s3-us-west-2.amazonaws.com/anchor-generated-image-bank/production/podcast_uploaded_episode400/2627805/2627805-1671590688729-4db3882ac9e4b.jpg',
+            'release_date': '20221221',
+            'release_timestamp': 1671595916,
+            'modified_date': '20221221',
+            'modified_timestamp': 1671590834,
+            'channel': 'apakatatempo',
+            'uploader': 'Podcast Tempo',
+            'uploader_id': '2585461',
+            'season': 'Season 2',
+            'season_number': 2,
+            'episode_id': 'e1shjqd',
+        }
+    }]
+
+    _WEBPAGE_TESTS = [{
+        'url': 'https://podcast.tempo.co/podcast/192/perang-bintang-di-balik-kasus-ferdy-sambo-dan-ismail-bolong',
+        'info_dict': {
+            'id': 'e1shjqd',
+            'ext': 'mp3',
+            'release_date': '20221221',
+            'duration': 1042.008,
+            'season': 'Season 2',
+            'modified_timestamp': 1671590834,
+            'uploader_id': '2585461',
+            'modified_date': '20221221',
+            'description': 'md5:9e95ad9293bf00178bf8d33e9cb92c41',
+            'season_number': 2,
+            'title': 'S2E75 Perang Bintang di Balik Kasus Ferdy Sambo dan Ismail Bolong',
+            'release_timestamp': 1671595916,
+            'episode_id': 'e1shjqd',
+            'thumbnail': 'https://s3-us-west-2.amazonaws.com/anchor-generated-image-bank/production/podcast_uploaded_episode400/2627805/2627805-1671590688729-4db3882ac9e4b.jpg',
+            'uploader': 'Podcast Tempo',
+            'channel': 'apakatatempo',
+        }
+    }]
+
+    def _real_extract(self, url):
+        channel_name, episode_id = self._match_valid_url(url).group('channel_name', 'episode_id')
+        api_data = self._download_json(f'https://anchor.fm/api/v3/episodes/{episode_id}', episode_id)
+
+        return {
+            'id': episode_id,
+            'title': traverse_obj(api_data, ('episode', 'title')),
+            'url': traverse_obj(api_data, ('episode', 'episodeEnclosureUrl'), ('episodeAudios', 0, 'url')),
+            'ext': 'mp3',
+            'vcodec': 'none',
+            'thumbnail': traverse_obj(api_data, ('episode', 'episodeImage')),
+            'description': clean_html(traverse_obj(api_data, ('episode', ('description', 'descriptionPreview')), get_all=False)),
+            'duration': float_or_none(traverse_obj(api_data, ('episode', 'duration')), 1000),
+            'modified_timestamp': unified_timestamp(traverse_obj(api_data, ('episode', 'modified'))),
+            'release_timestamp': int_or_none(traverse_obj(api_data, ('episode', 'publishOnUnixTimestamp'))),
+            'episode_id': episode_id,
+            'uploader': traverse_obj(api_data, ('creator', 'name')),
+            'uploader_id': str_or_none(traverse_obj(api_data, ('creator', 'userId'))),
+            'season_number': int_or_none(traverse_obj(api_data, ('episode', 'podcastSeasonNumber'))),
+            'channel': channel_name or traverse_obj(api_data, ('creator', 'vanitySlug')),
+        }
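Note: the new extractor resolves both canonical and /embed/ episode URLs through the public anchor.fm API shown above. A quick usage sketch with the standard embedding API (network access and the episode's continued availability assumed):

    import yt_dlp

    URL = 'https://anchor.fm/apakatatempo/embed/episodes/S2E75-Perang-Bintang-di-Balik-Kasus-Ferdy-Sambo-dan-Ismail-Bolong-e1shjqd'
    with yt_dlp.YoutubeDL({'quiet': True}) as ydl:
        # extract_info with download=False returns the parsed metadata dict
        info = ydl.extract_info(URL, download=False)
        print(info['id'], info['ext'], info.get('duration'))  # e1shjqd mp3 1042.008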
@@ -336,7 +336,7 @@ def _get_anvato_videos(self, access_key, video_id, token):
             elif media_format == 'm3u8-variant' or ext == 'm3u8':
                 # For some videos the initial m3u8 URL returns JSON instead
                 manifest_json = self._download_json(
-                    video_url, video_id, note='Downloading manifest JSON', errnote=False)
+                    video_url, video_id, note='Downloading manifest JSON', fatal=False)
                 if manifest_json:
                     video_url = manifest_json.get('master_m3u8')
                     if not video_url:
@@ -392,14 +392,6 @@ def _extract_from_webpage(cls, url, webpage):
             url = smuggle_url(url, {'token': anvplayer_data['token']})
             yield cls.url_result(url, AnvatoIE, video_id)

-    def _extract_anvato_videos(self, webpage, video_id):
-        anvplayer_data = self._parse_json(
-            self._html_search_regex(
-                self._ANVP_RE, webpage, 'Anvato player data', group='anvp'),
-            video_id)
-        return self._get_anvato_videos(
-            anvplayer_data['accessKey'], anvplayer_data['video'], 'default')  # cbslocal token = 'default'
-
     def _real_extract(self, url):
         url, smuggled_data = unsmuggle_url(url, {})
         self._initialize_geo_bypass({
@@ -1,8 +1,10 @@
 import json
 import re
+import urllib.error
 import urllib.parse

 from .common import InfoExtractor
+from .naver import NaverBaseIE
 from .youtube import YoutubeBaseInfoExtractor, YoutubeIE
 from ..compat import compat_HTTPError, compat_urllib_parse_unquote
 from ..utils import (
@@ -945,3 +947,237 @@ def _real_extract(self, url):
         if not info.get('title'):
             info['title'] = video_id
         return info
+
+
+class VLiveWebArchiveIE(InfoExtractor):
+    IE_NAME = 'web.archive:vlive'
+    IE_DESC = 'web.archive.org saved vlive videos'
+    _VALID_URL = r'''(?x)
+            (?:https?://)?web\.archive\.org/
+            (?:web/)?(?:(?P<date>[0-9]{14})?[0-9A-Za-z_*]*/)?  # /web and the version index is optional
+            (?:https?(?::|%3[Aa])//)?(?:
+                (?:(?:www|m)\.)?vlive\.tv(?::(?:80|443))?/(?:video|embed)/(?P<id>[0-9]+)  # VLive URL
+            )
+        '''
+    _TESTS = [{
+        'url': 'https://web.archive.org/web/20221221144331/http://www.vlive.tv/video/1326',
+        'md5': 'cc7314812855ce56de70a06a27314983',
+        'info_dict': {
+            'id': '1326',
+            'ext': 'mp4',
+            'title': "Girl's Day's Broadcast",
+            'creator': "Girl's Day",
+            'view_count': int,
+            'uploader_id': 'muploader_a',
+            'uploader_url': None,
+            'uploader': None,
+            'upload_date': '20150817',
+            'thumbnail': r're:^https?://.*\.(?:jpg|png)$',
+            'timestamp': 1439816449,
+            'like_count': int,
+            'channel': 'Girl\'s Day',
+            'channel_id': 'FDF27',
+            'comment_count': int,
+            'release_timestamp': 1439818140,
+            'release_date': '20150817',
+            'duration': 1014,
+        },
+        'params': {
+            'skip_download': True,
+        },
+    }, {
+        'url': 'https://web.archive.org/web/20221221182103/http://www.vlive.tv/video/16937',
+        'info_dict': {
+            'id': '16937',
+            'ext': 'mp4',
+            'title': '첸백시 걍방',
+            'creator': 'EXO',
+            'view_count': int,
+            'subtitles': 'mincount:12',
+            'uploader_id': 'muploader_j',
+            'uploader_url': 'http://vlive.tv',
+            'uploader': None,
+            'upload_date': '20161112',
+            'thumbnail': r're:^https?://.*\.(?:jpg|png)$',
+            'timestamp': 1478923074,
+            'like_count': int,
+            'channel': 'EXO',
+            'channel_id': 'F94BD',
+            'comment_count': int,
+            'release_timestamp': 1478924280,
+            'release_date': '20161112',
+            'duration': 906,
+        },
+        'params': {
+            'skip_download': True,
+        },
+    }, {
+        'url': 'https://web.archive.org/web/20221127190050/http://www.vlive.tv/video/101870',
+        'info_dict': {
+            'id': '101870',
+            'ext': 'mp4',
+            'title': '[ⓓ xV] “레벨이들 매력에 반해? 안 반해?” 움직이는 HD 포토 (레드벨벳:Red Velvet)',
+            'creator': 'Dispatch',
+            'view_count': int,
+            'subtitles': 'mincount:6',
+            'uploader_id': 'V__FRA08071',
+            'uploader_url': 'http://vlive.tv',
+            'uploader': None,
+            'upload_date': '20181130',
+            'thumbnail': r're:^https?://.*\.(?:jpg|png)$',
+            'timestamp': 1543601327,
+            'like_count': int,
+            'channel': 'Dispatch',
+            'channel_id': 'C796F3',
+            'comment_count': int,
+            'release_timestamp': 1543601040,
+            'release_date': '20181130',
+            'duration': 279,
+        },
+        'params': {
+            'skip_download': True,
+        },
+    }]
+
+    # The wayback machine has special timestamp and "mode" values:
+    # timestamp:
+    #   1 = the first capture
+    #   2 = the last capture
+    # mode:
+    #   id_ = Identity - perform no alterations of the original resource, return it as it was archived.
+    _WAYBACK_BASE_URL = 'https://web.archive.org/web/2id_/'
+
+    def _download_archived_page(self, url, video_id, *, timestamp='2', **kwargs):
+        for retry in self.RetryManager():
+            try:
+                return self._download_webpage(f'https://web.archive.org/web/{timestamp}id_/{url}', video_id, **kwargs)
+            except ExtractorError as e:
+                if isinstance(e.cause, urllib.error.HTTPError) and e.cause.code == 404:
+                    raise ExtractorError('Page was not archived', expected=True)
+                retry.error = e
+                continue
+
+    def _download_archived_json(self, url, video_id, **kwargs):
+        page = self._download_archived_page(url, video_id, **kwargs)
+        if not page:
+            raise ExtractorError('Page was not archived', expected=True)
+        else:
+            return self._parse_json(page, video_id)
+
+    def _extract_formats_from_m3u8(self, m3u8_url, params, video_id):
+        m3u8_doc = self._download_archived_page(m3u8_url, video_id, note='Downloading m3u8', query=params, fatal=False)
+        if not m3u8_doc:
+            return
+
+        # M3U8 document should be changed to archive domain
+        m3u8_doc = m3u8_doc.splitlines()
+        url_base = m3u8_url.rsplit('/', 1)[0]
+        first_segment = None
+        for i, line in enumerate(m3u8_doc):
+            if not line.startswith('#'):
+                m3u8_doc[i] = f'{self._WAYBACK_BASE_URL}{url_base}/{line}?{urllib.parse.urlencode(params)}'
+                first_segment = first_segment or m3u8_doc[i]
+
+        # Segments may not have been archived. See https://web.archive.org/web/20221127190050/http://www.vlive.tv/video/101870
+        urlh = self._request_webpage(HEADRequest(first_segment), video_id, errnote=False,
+                                     fatal=False, note='Check first segment availablity')
+        if urlh:
+            formats, subtitles = self._parse_m3u8_formats_and_subtitles('\n'.join(m3u8_doc), ext='mp4', video_id=video_id)
+            if subtitles:
+                self._report_ignoring_subs('m3u8')
+            return formats
+
+    # Closely follows the logic of the ArchiveTeam grab script
+    # See: https://github.com/ArchiveTeam/vlive-grab/blob/master/vlive.lua
+    def _real_extract(self, url):
+        video_id, url_date = self._match_valid_url(url).group('id', 'date')
+
+        webpage = self._download_archived_page(f'https://www.vlive.tv/video/{video_id}', video_id, timestamp=url_date)
+
+        player_info = self._search_json(r'__PRELOADED_STATE__\s*=', webpage, 'player info', video_id)
+        user_country = traverse_obj(player_info, ('common', 'userCountry'))
+
+        main_script_url = self._search_regex(r'<script\s+src="([^"]+/js/main\.[^"]+\.js)"', webpage, 'main script url')
+        main_script = self._download_archived_page(main_script_url, video_id, note='Downloading main script')
+        app_id = self._search_regex(r'appId\s*=\s*"([^"]+)"', main_script, 'app id')
+
+        inkey = self._download_archived_json(
+            f'https://www.vlive.tv/globalv-web/vam-web/video/v1.0/vod/{video_id}/inkey', video_id, note='Fetching inkey', query={
+                'appId': app_id,
+                'platformType': 'PC',
+                'gcc': user_country,
+                'locale': 'en_US',
+            }, fatal=False)
+
+        vod_id = traverse_obj(player_info, ('postDetail', 'post', 'officialVideo', 'vodId'))
+
+        vod_data = self._download_archived_json(
+            f'https://apis.naver.com/rmcnmv/rmcnmv/vod/play/v2.0/{vod_id}', video_id, note='Fetching vod data', query={
+                'key': inkey.get('inkey'),
+                'pid': 'rmcPlayer_16692457559726800',  # partially unix time and partially random. Fixed value used by archiveteam project
+                'sid': '2024',
+                'ver': '2.0',
+                'devt': 'html5_pc',
+                'doct': 'json',
+                'ptc': 'https',
+                'sptc': 'https',
+                'cpt': 'vtt',
+                'ctls': '%7B%22visible%22%3A%7B%22fullscreen%22%3Atrue%2C%22logo%22%3Afalse%2C%22playbackRate%22%3Afalse%2C%22scrap%22%3Afalse%2C%22playCount%22%3Atrue%2C%22commentCount%22%3Atrue%2C%22title%22%3Atrue%2C%22writer%22%3Atrue%2C%22expand%22%3Afalse%2C%22subtitles%22%3Atrue%2C%22thumbnails%22%3Atrue%2C%22quality%22%3Atrue%2C%22setting%22%3Atrue%2C%22script%22%3Afalse%2C%22logoDimmed%22%3Atrue%2C%22badge%22%3Atrue%2C%22seekingTime%22%3Atrue%2C%22muted%22%3Atrue%2C%22muteButton%22%3Afalse%2C%22viewerNotice%22%3Afalse%2C%22linkCount%22%3Afalse%2C%22createTime%22%3Afalse%2C%22thumbnail%22%3Atrue%7D%2C%22clicked%22%3A%7B%22expand%22%3Afalse%2C%22subtitles%22%3Afalse%7D%7D',
+                'pv': '4.26.9',
+                'dr': '1920x1080',
+                'cpl': 'en_US',
+                'lc': 'en_US',
+                'adi': '%5B%7B%22type%22%3A%22pre%22%2C%22exposure%22%3Afalse%2C%22replayExposure%22%3Afalse%7D%5D',
+                'adu': '%2F',
+                'videoId': vod_id,
+                'cc': user_country,
+            })
+
+        formats = []
+
+        streams = traverse_obj(vod_data, ('streams', ...))
+        if len(streams) > 1:
+            self.report_warning('Multiple streams found. Only the first stream will be downloaded.')
+        stream = streams[0]
+
+        max_stream = max(
+            stream.get('videos') or [],
+            key=lambda v: traverse_obj(v, ('bitrate', 'video'), default=0), default=None)
+        if max_stream is not None:
+            params = {arg.get('name'): arg.get('value') for arg in stream.get('keys', []) if arg.get('type') == 'param'}
+            formats = self._extract_formats_from_m3u8(max_stream.get('source'), params, video_id) or []
+
+        # For parts of the project MP4 files were archived
+        max_video = max(
+            traverse_obj(vod_data, ('videos', 'list', ...)),
+            key=lambda v: traverse_obj(v, ('bitrate', 'video'), default=0), default=None)
+        if max_video is not None:
+            video_url = self._WAYBACK_BASE_URL + max_video.get('source')
+            urlh = self._request_webpage(HEADRequest(video_url), video_id, errnote=False,
+                                         fatal=False, note='Check video availablity')
+            if urlh:
+                formats.append({'url': video_url})
+
+        return {
+            'id': video_id,
+            'formats': formats,
+            **traverse_obj(player_info, ('postDetail', 'post', {
+                'title': ('officialVideo', 'title', {str}),
+                'creator': ('author', 'nickname', {str}),
+                'channel': ('channel', 'channelName', {str}),
+                'channel_id': ('channel', 'channelCode', {str}),
+                'duration': ('officialVideo', 'playTime', {int_or_none}),
+                'view_count': ('officialVideo', 'playCount', {int_or_none}),
+                'like_count': ('officialVideo', 'likeCount', {int_or_none}),
+                'comment_count': ('officialVideo', 'commentCount', {int_or_none}),
+                'timestamp': ('officialVideo', 'createdAt', {lambda x: int_or_none(x, scale=1000)}),
+                'release_timestamp': ('officialVideo', 'willStartAt', {lambda x: int_or_none(x, scale=1000)}),
+            })),
+            **traverse_obj(vod_data, ('meta', {
+                'uploader_id': ('user', 'id', {str}),
+                'uploader': ('user', 'name', {str}),
+                'uploader_url': ('user', 'url', {url_or_none}),
+                'thumbnail': ('cover', 'source', {url_or_none}),
+            }), expected_type=lambda x: x or None),
+            **NaverBaseIE.process_subtitles(vod_data, lambda x: [self._WAYBACK_BASE_URL + x]),
+        }
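Note: as the comments in VLiveWebArchiveIE explain, the Wayback Machine treats timestamp '2' as "the most recent capture" and the id_ mode suffix as "return the archived bytes unaltered". The URL shape the extractor builds is simply:

    # '2' = latest capture; 'id_' = identity mode (no archive.org rewriting)
    def wayback_url(original_url, timestamp='2'):
        return f'https://web.archive.org/web/{timestamp}id_/{original_url}'

    print(wayback_url('http://www.vlive.tv/video/1326'))
    # https://web.archive.org/web/2id_/http://www.vlive.tv/video/1326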
@@ -13,6 +13,7 @@
     try_get,
     unified_strdate,
     unified_timestamp,
+    update_url,
     update_url_query,
     url_or_none,
     xpath_text,
@@ -408,6 +409,23 @@ class ARDBetaMediathekIE(ARDMediathekBaseIE):
                     (?(playlist)/(?P<season>\d+)?/?(?:[?#]|$))'''

     _TESTS = [{
+        'url': 'https://www.ardmediathek.de/video/filme-im-mdr/wolfsland-die-traurigen-schwestern/mdr-fernsehen/Y3JpZDovL21kci5kZS9iZWl0cmFnL2Ntcy8xZGY0ZGJmZS00ZWQwLTRmMGItYjhhYy0wOGQ4ZmYxNjVhZDI',
+        'md5': '3fd5fead7a370a819341129c8d713136',
+        'info_dict': {
+            'display_id': 'filme-im-mdr/wolfsland-die-traurigen-schwestern/mdr-fernsehen',
+            'id': '12172961',
+            'title': 'Wolfsland - Die traurigen Schwestern',
+            'description': r're:^Als der Polizeiobermeister Raaben',
+            'duration': 5241,
+            'thumbnail': 'https://api.ardmediathek.de/image-service/images/urn:ard:image:efa186f7b0054957',
+            'timestamp': 1670710500,
+            'upload_date': '20221210',
+            'ext': 'mp4',
+            'age_limit': 12,
+            'episode': 'Wolfsland - Die traurigen Schwestern',
+            'series': 'Filme im MDR'
+        },
+    }, {
         'url': 'https://www.ardmediathek.de/mdr/video/die-robuste-roswita/Y3JpZDovL21kci5kZS9iZWl0cmFnL2Ntcy84MWMxN2MzZC0wMjkxLTRmMzUtODk4ZS0wYzhlOWQxODE2NGI/',
         'md5': 'a1dc75a39c61601b980648f7c9f9f71d',
         'info_dict': {
@@ -424,7 +442,7 @@ class ARDBetaMediathekIE(ARDMediathekBaseIE):
         'skip': 'Error',
     }, {
         'url': 'https://www.ardmediathek.de/video/tagesschau-oder-tagesschau-20-00-uhr/das-erste/Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhZ2Vzc2NoYXUvZmM4ZDUxMjgtOTE0ZC00Y2MzLTgzNzAtNDZkNGNiZWJkOTll',
-        'md5': 'f1837e563323b8a642a8ddeff0131f51',
+        'md5': '1e73ded21cb79bac065117e80c81dc88',
         'info_dict': {
             'id': '10049223',
             'ext': 'mp4',
@@ -432,13 +450,11 @@ class ARDBetaMediathekIE(ARDMediathekBaseIE):
             'timestamp': 1636398000,
             'description': 'md5:39578c7b96c9fe50afdf5674ad985e6b',
             'upload_date': '20211108',
-        },
-    }, {
-        'url': 'https://www.ardmediathek.de/sendung/beforeigners/beforeigners/staffel-1/Y3JpZDovL2Rhc2Vyc3RlLmRlL2JlZm9yZWlnbmVycw/1',
-        'playlist_count': 6,
-        'info_dict': {
-            'id': 'Y3JpZDovL2Rhc2Vyc3RlLmRlL2JlZm9yZWlnbmVycw',
-            'title': 'beforeigners/beforeigners/staffel-1',
+            'display_id': 'tagesschau-oder-tagesschau-20-00-uhr/das-erste',
+            'duration': 915,
+            'episode': 'tagesschau, 20:00 Uhr',
+            'series': 'tagesschau',
+            'thumbnail': 'https://api.ardmediathek.de/image-service/images/urn:ard:image:fbb21142783b0a49',
         },
     }, {
         'url': 'https://beta.ardmediathek.de/ard/video/Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhdG9ydC9mYmM4NGM1NC0xNzU4LTRmZGYtYWFhZS0wYzcyZTIxNGEyMDE',
@@ -602,6 +618,9 @@ def _real_extract(self, url):
                 show {
                     title
                 }
+                image {
+                    src
+                }
                 synopsis
                 title
                 tracking {
@@ -640,6 +659,15 @@ def _real_extract(self, url):
             'description': description,
             'timestamp': unified_timestamp(player_page.get('broadcastedOn')),
             'series': try_get(player_page, lambda x: x['show']['title']),
+            'thumbnail': (media_collection.get('_previewImage')
+                          or try_get(player_page, lambda x: update_url(x['image']['src'], query=None, fragment=None))
+                          or self.get_thumbnail_from_html(display_id, url)),
         })
         info.update(self._ARD_extract_episode_info(info['title']))
         return info
+
+    def get_thumbnail_from_html(self, display_id, url):
+        webpage = self._download_webpage(url, display_id, fatal=False) or ''
+        return (
+            self._og_search_thumbnail(webpage, default=None)
+            or self._html_search_meta('thumbnailUrl', webpage, default=None))
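Note: the new thumbnail fallback calls update_url(..., query=None, fragment=None) to drop the query string and fragment from the GraphQL image src. A rough urllib-only equivalent of that single call (an approximation, not the actual yt-dlp helper):

    import urllib.parse

    def strip_query_and_fragment(url):
        # keep scheme/host/path, drop '?...' and '#...'
        return urllib.parse.urlsplit(url)._replace(query='', fragment='').geturl()

    print(strip_query_and_fragment('https://example.com/img.jpg?w=1280#top'))
    # https://example.com/img.jpg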
@@ -5,7 +5,7 @@


 class BFMTVBaseIE(InfoExtractor):
-    _VALID_URL_BASE = r'https?://(?:www\.)?bfmtv\.com/'
+    _VALID_URL_BASE = r'https?://(?:www\.|rmc\.)?bfmtv\.com/'
     _VALID_URL_TMPL = _VALID_URL_BASE + r'(?:[^/]+/)*[^/?&#]+_%s[A-Z]-(?P<id>\d{12})\.html'
     _VIDEO_BLOCK_REGEX = r'(<div[^>]+class="video_block"[^>]*>)'
     BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/%s/%s_default/index.html?videoId=%s'
@@ -31,6 +31,9 @@ class BFMTVIE(BFMTVBaseIE):
             'uploader_id': '876450610001',
             'upload_date': '20201002',
             'timestamp': 1601629620,
+            'duration': 44.757,
+            'tags': ['bfmactu', 'politique'],
+            'thumbnail': 'https://cf-images.eu-west-1.prod.boltdns.net/v1/static/876450610001/5041f4c1-bc48-4af8-a256-1b8300ad8ef0/cf2f9114-e8e2-4494-82b4-ab794ea4bc7d/1920x1080/match/image.jpg',
         },
     }]

@@ -81,6 +84,20 @@ class BFMTVArticleIE(BFMTVBaseIE):
     }, {
         'url': 'https://www.bfmtv.com/sante/covid-19-oui-le-vaccin-de-pfizer-distribue-en-france-a-bien-ete-teste-sur-des-personnes-agees_AN-202101060275.html',
         'only_matching': True,
+    }, {
+        'url': 'https://rmc.bfmtv.com/actualites/societe/transports/ce-n-est-plus-tout-rentable-le-bioethanol-e85-depasse-1eu-le-litre-des-automobilistes-regrettent_AV-202301100268.html',
+        'info_dict': {
+            'id': '6318445464112',
+            'ext': 'mp4',
+            'title': 'Le plein de bioéthanol fait de plus en plus mal à la pompe',
+            'description': None,
+            'uploader_id': '876630703001',
+            'upload_date': '20230110',
+            'timestamp': 1673341692,
+            'duration': 109.269,
+            'tags': ['rmc', 'show', 'apolline de malherbe', 'info', 'talk', 'matinale', 'radio'],
+            'thumbnail': 'https://cf-images.eu-west-1.prod.boltdns.net/v1/static/876630703001/5bef74b8-9d5e-4480-a21f-60c2e2480c46/96c88b74-f9db-45e1-8040-e199c5da216c/1920x1080/match/image.jpg'
+        }
     }]

     def _real_extract(self, url):
@@ -1,27 +1,197 @@
+from functools import partial
+
 from .common import InfoExtractor
+from ..utils import (
+    ExtractorError,
+    clean_html,
+    determine_ext,
+    format_field,
+    int_or_none,
+    js_to_json,
+    orderedSet,
+    parse_iso8601,
+    traverse_obj,
+    url_or_none,
+)
+
+
-class BibelTVIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?bibeltv\.de/mediathek/videos/(?:crn/)?(?P<id>\d+)'
-    _TESTS = [{
-        'url': 'https://www.bibeltv.de/mediathek/videos/329703-sprachkurs-in-malaiisch',
-        'md5': '252f908192d611de038b8504b08bf97f',
-        'info_dict': {
-            'id': 'ref:329703',
-            'ext': 'mp4',
-            'title': 'Sprachkurs in Malaiisch',
-            'description': 'md5:3e9f197d29ee164714e67351cf737dfe',
-            'timestamp': 1608316701,
-            'uploader_id': '5840105145001',
-            'upload_date': '20201218',
+class BibelTVBaseIE(InfoExtractor):
+    _GEO_COUNTRIES = ['AT', 'CH', 'DE']
+    _GEO_BYPASS = False
+
+    API_URL = 'https://www.bibeltv.de/mediathek/api'
+    AUTH_TOKEN = 'j88bRXY8DsEqJ9xmTdWhrByVi5Hm'
+
+    def _extract_formats_and_subtitles(self, data, crn_id, *, is_live=False):
+        formats = []
+        subtitles = {}
+        for media_url in traverse_obj(data, (..., 'src', {url_or_none})):
+            media_ext = determine_ext(media_url)
+            if media_ext == 'm3u8':
+                m3u8_formats, m3u8_subs = self._extract_m3u8_formats_and_subtitles(
+                    media_url, crn_id, live=is_live)
+                formats.extend(m3u8_formats)
+                subtitles.update(m3u8_subs)
+            elif media_ext == 'mpd':
+                mpd_formats, mpd_subs = self._extract_mpd_formats_and_subtitles(media_url, crn_id)
+                formats.extend(mpd_formats)
+                subtitles.update(mpd_subs)
+            elif media_ext == 'mp4':
+                formats.append({'url': media_url})
+            else:
+                self.report_warning(f'Unknown format {media_ext!r}')
+
+        return formats, subtitles
+
+    @staticmethod
+    def _extract_base_info(data):
+        return {
+            'id': data['crn'],
+            **traverse_obj(data, {
+                'title': 'title',
+                'description': 'description',
+                'duration': ('duration', {partial(int_or_none, scale=1000)}),
+                'timestamp': ('schedulingStart', {parse_iso8601}),
+                'season_number': 'seasonNumber',
+                'episode_number': 'episodeNumber',
+                'view_count': 'viewCount',
+                'like_count': 'likeCount',
+            }),
+            'thumbnails': orderedSet(traverse_obj(data, ('images', ..., {
+                'url': ('url', {url_or_none}),
+            }))),
         }
-    }, {
-        'url': 'https://www.bibeltv.de/mediathek/videos/crn/326374',
-        'only_matching': True,
+
+    def _extract_url_info(self, data):
+        return {
+            '_type': 'url',
+            'url': format_field(data, 'slug', 'https://www.bibeltv.de/mediathek/videos/%s'),
+            **self._extract_base_info(data),
+        }
+
+    def _extract_video_info(self, data):
+        crn_id = data['crn']
+
+        if data.get('drm'):
+            self.report_drm(crn_id)
+
+        json_data = self._download_json(
+            format_field(data, 'id', f'{self.API_URL}/video/%s'), crn_id,
+            headers={'Authorization': self.AUTH_TOKEN}, fatal=False,
+            errnote='No formats available') or {}
+
+        formats, subtitles = self._extract_formats_and_subtitles(
+            traverse_obj(json_data, ('video', 'videoUrls', ...)), crn_id)
+
+        return {
+            '_type': 'video',
+            **self._extract_base_info(data),
+            'formats': formats,
+            'subtitles': subtitles,
+        }
+
+
+class BibelTVVideoIE(BibelTVBaseIE):
+    IE_DESC = 'BibelTV single video'
+    _VALID_URL = r'https?://(?:www\.)?bibeltv\.de/mediathek/videos/(?P<id>\d+)[\w-]+'
+    IE_NAME = 'bibeltv:video'
+
+    _TESTS = [{
+        'url': 'https://www.bibeltv.de/mediathek/videos/344436-alte-wege',
+        'md5': 'ec1c07efe54353780512e8a4103b612e',
+        'info_dict': {
+            'id': '344436',
+            'ext': 'mp4',
+            'title': 'Alte Wege',
+            'description': 'md5:2f4eb7294c9797a47b8fd13cccca22e9',
+            'timestamp': 1677877071,
+            'duration': 150.0,
+            'upload_date': '20230303',
+            'thumbnail': r're:https://bibeltv\.imgix\.net/[\w-]+\.jpg',
+            'episode': 'Episode 1',
+            'episode_number': 1,
+            'view_count': int,
+            'like_count': int,
+        },
+        'params': {
+            'format': '6',
+        },
     }]
-    BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/5840105145001/default_default/index.html?videoId=ref:%s'

     def _real_extract(self, url):
         crn_id = self._match_id(url)
-        return self.url_result(
-            self.BRIGHTCOVE_URL_TEMPLATE % crn_id, 'BrightcoveNew')
+        video_data = traverse_obj(
+            self._search_nextjs_data(self._download_webpage(url, crn_id), crn_id),
+            ('props', 'pageProps', 'videoPageData', 'videos', 0, {dict}))
+        if not video_data:
+            raise ExtractorError('Missing video data.')
+
+        return self._extract_video_info(video_data)
+
+
+class BibelTVSeriesIE(BibelTVBaseIE):
+    IE_DESC = 'BibelTV series playlist'
+    _VALID_URL = r'https?://(?:www\.)?bibeltv\.de/mediathek/serien/(?P<id>\d+)[\w-]+'
+    IE_NAME = 'bibeltv:series'
+
+    _TESTS = [{
+        'url': 'https://www.bibeltv.de/mediathek/serien/333485-ein-wunder-fuer-jeden-tag',
+        'playlist_mincount': 400,
+        'info_dict': {
+            'id': '333485',
+            'title': 'Ein Wunder für jeden Tag',
+            'description': 'Tägliche Kurzandacht mit Déborah Rosenkranz.',
+        },
+    }]
+
+    def _real_extract(self, url):
+        crn_id = self._match_id(url)
+        webpage = self._download_webpage(url, crn_id)
+        nextjs_data = self._search_nextjs_data(webpage, crn_id)
+        series_data = traverse_obj(nextjs_data, ('props', 'pageProps', 'seriePageData', {dict}))
+        if not series_data:
+            raise ExtractorError('Missing series data.')
+
+        return self.playlist_result(
+            traverse_obj(series_data, ('videos', ..., {dict}, {self._extract_url_info})),
+            crn_id, series_data.get('title'), clean_html(series_data.get('description')))
+
+
+class BibelTVLiveIE(BibelTVBaseIE):
+    IE_DESC = 'BibelTV live program'
+    _VALID_URL = r'https?://(?:www\.)?bibeltv\.de/livestreams/(?P<id>[\w-]+)'
+    IE_NAME = 'bibeltv:live'
+
+    _TESTS = [{
+        'url': 'https://www.bibeltv.de/livestreams/bibeltv/',
+        'info_dict': {
+            'id': 'bibeltv',
+            'ext': 'mp4',
+            'title': 're:Bibel TV',
+            'live_status': 'is_live',
+            'thumbnail': 'https://streampreview.bibeltv.de/bibeltv.webp',
+        },
+        'params': {'skip_download': 'm3u8'},
+    }, {
+        'url': 'https://www.bibeltv.de/livestreams/impuls/',
+        'only_matching': True,
+    }]
+
+    def _real_extract(self, url):
+        stream_id = self._match_id(url)
+        webpage = self._download_webpage(url, stream_id)
+        stream_data = self._search_json(
+            r'\\"video\\":', webpage, 'bibeltvData', stream_id,
+            transform_source=lambda jstring: js_to_json(jstring.replace('\\"', '"')))
+
+        formats, subtitles = self._extract_formats_and_subtitles(
+            traverse_obj(stream_data, ('src', ...)), stream_id, is_live=True)
+
+        return {
+            'id': stream_id,
+            'title': stream_data.get('title'),
+            'thumbnail': stream_data.get('poster'),
+            'is_live': True,
+            'formats': formats,
+            'subtitles': subtitles,
+        }
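Note: BibelTVLiveIE finds the stream object in script data where every quote is escaped as \", so the transform_source above first unescapes and then runs js_to_json. Roughly (sample payload invented):

    from yt_dlp.utils import js_to_json

    raw = '{\\"title\\":\\"Bibel TV\\",\\"src\\":[{\\"src\\":\\"https://example.com/live.m3u8\\"}]}'
    # undo the escaping, then normalize any remaining JS-isms to strict JSON
    print(js_to_json(raw.replace('\\"', '"')))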
@@ -1,11 +1,14 @@
 import base64
 import functools
+import hashlib
 import itertools
 import math
+import time
 import urllib.error
 import urllib.parse

 from .common import InfoExtractor, SearchInfoExtractor
+from ..dependencies import Cryptodome
 from ..utils import (
     ExtractorError,
     GeoRestrictedError,
@@ -25,6 +28,8 @@
     srt_subtitles_timecode,
     str_or_none,
     traverse_obj,
+    try_call,
+    unified_timestamp,
     unsmuggle_url,
     url_or_none,
     urlencode_postdata,
@@ -80,7 +85,7 @@ def json2srt(self, json_data):
                 f'{line["content"]}\n\n')
         return srt_data

-    def _get_subtitles(self, video_id, initial_state, cid):
+    def _get_subtitles(self, video_id, aid, cid):
         subtitles = {
             'danmaku': [{
                 'ext': 'xml',
@@ -88,7 +93,8 @@ def _get_subtitles(self, video_id, initial_state, cid):
             }]
         }

-        for s in traverse_obj(initial_state, ('videoData', 'subtitle', 'list')) or []:
+        video_info_json = self._download_json(f'https://api.bilibili.com/x/player/v2?aid={aid}&cid={cid}', video_id)
+        for s in traverse_obj(video_info_json, ('data', 'subtitle', 'subtitles', ...)):
             subtitles.setdefault(s['lan'], []).append({
                 'ext': 'srt',
                 'data': self.json2srt(self._download_json(s['subtitle_url'], video_id))
@@ -131,7 +137,7 @@ def _get_all_children(self, reply):


 class BiliBiliIE(BilibiliBaseIE):
-    _VALID_URL = r'https?://www\.bilibili\.com/video/[aAbB][vV](?P<id>[^/?#&]+)'
+    _VALID_URL = r'https?://www\.bilibili\.com/(?:video/|festival/\w+\?(?:[^#]*&)?bvid=)[aAbB][vV](?P<id>[^/?#&]+)'

     _TESTS = [{
         'url': 'https://www.bilibili.com/video/BV13x41117TL',
@@ -279,19 +285,60 @@ class BiliBiliIE(BilibiliBaseIE):
             'thumbnail': r're:^https?://.*\.(jpg|jpeg|png)$',
         },
         'params': {'skip_download': True},
+    }, {
+        'note': 'video redirects to festival page',
+        'url': 'https://www.bilibili.com/video/BV1wP4y1P72h',
+        'info_dict': {
+            'id': 'BV1wP4y1P72h',
+            'ext': 'mp4',
+            'title': '牛虎年相交之际,一首传统民族打击乐《牛斗虎》祝大家新春快乐,虎年大吉!【bilibili音乐虎闹新春】',
+            'timestamp': 1643947497,
+            'upload_date': '20220204',
+            'description': 'md5:8681a0d4d2c06b4ae27e59c8080a7fe6',
+            'uploader': '叨叨冯聊音乐',
+            'duration': 246.719,
+            'uploader_id': '528182630',
+            'view_count': int,
+            'like_count': int,
+            'thumbnail': r're:^https?://.*\.(jpg|jpeg|png)$',
+        },
+        'params': {'skip_download': True},
+    }, {
+        'note': 'newer festival video',
+        'url': 'https://www.bilibili.com/festival/2023honkaiimpact3gala?bvid=BV1ay4y1d77f',
+        'info_dict': {
+            'id': 'BV1ay4y1d77f',
+            'ext': 'mp4',
+            'title': '【崩坏3新春剧场】为特别的你送上祝福!',
+            'timestamp': 1674273600,
+            'upload_date': '20230121',
+            'description': 'md5:58af66d15c6a0122dc30c8adfd828dd8',
+            'uploader': '果蝇轰',
+            'duration': 1111.722,
+            'uploader_id': '8469526',
+            'view_count': int,
+            'like_count': int,
+            'thumbnail': r're:^https?://.*\.(jpg|jpeg|png)$',
+        },
+        'params': {'skip_download': True},
     }]

     def _real_extract(self, url):
         video_id = self._match_id(url)
         webpage = self._download_webpage(url, video_id)
         initial_state = self._search_json(r'window\.__INITIAL_STATE__\s*=', webpage, 'initial state', video_id)
-        play_info = self._search_json(r'window\.__playinfo__\s*=', webpage, 'play info', video_id)['data']

-        video_data = initial_state['videoData']
+        is_festival = 'videoData' not in initial_state
+        if is_festival:
+            video_data = initial_state['videoInfo']
+        else:
+            play_info = self._search_json(r'window\.__playinfo__\s*=', webpage, 'play info', video_id)['data']
+            video_data = initial_state['videoData']
+
         video_id, title = video_data['bvid'], video_data.get('title')

         # Bilibili anthologies are similar to playlists but all videos share the same video ID as the anthology itself.
-        page_list_json = traverse_obj(
+        page_list_json = not is_festival and traverse_obj(
             self._download_json(
                 'https://api.bilibili.com/x/player/pagelist', video_id,
                 fatal=False, query={'bvid': video_id, 'jsonp': 'jsonp'},
@@ -314,23 +361,42 @@ def _real_extract(self, url):

         cid = traverse_obj(video_data, ('pages', part_id - 1, 'cid')) if part_id else video_data.get('cid')

+        festival_info = {}
+        if is_festival:
+            play_info = self._download_json(
+                'https://api.bilibili.com/x/player/playurl', video_id,
+                query={'bvid': video_id, 'cid': cid, 'fnval': 4048},
+                note='Extracting festival video formats')['data']
+
+            festival_info = traverse_obj(initial_state, {
+                'uploader': ('videoInfo', 'upName'),
+                'uploader_id': ('videoInfo', 'upMid', {str_or_none}),
+                'like_count': ('videoStatus', 'like', {int_or_none}),
+                'thumbnail': ('sectionEpisodes', lambda _, v: v['bvid'] == video_id, 'cover'),
+            }, get_all=False)
+
         return {
+            **traverse_obj(initial_state, {
+                'uploader': ('upData', 'name'),
+                'uploader_id': ('upData', 'mid', {str_or_none}),
+                'like_count': ('videoData', 'stat', 'like', {int_or_none}),
+                'tags': ('tags', ..., 'tag_name'),
+                'thumbnail': ('videoData', 'pic', {url_or_none}),
+            }),
+            **festival_info,
+            **traverse_obj(video_data, {
+                'description': 'desc',
+                'timestamp': ('pubdate', {int_or_none}),
+                'view_count': (('viewCount', ('stat', 'view')), {int_or_none}),
+                'comment_count': ('stat', 'reply', {int_or_none}),
+            }, get_all=False),
             'id': f'{video_id}{format_field(part_id, None, "_p%d")}',
             'formats': self.extract_formats(play_info),
             '_old_archive_ids': [make_archive_id(self, old_video_id)] if old_video_id else None,
             'title': title,
-            'description': traverse_obj(initial_state, ('videoData', 'desc')),
-            'view_count': traverse_obj(initial_state, ('videoData', 'stat', 'view')),
-            'uploader': traverse_obj(initial_state, ('upData', 'name')),
-            'uploader_id': traverse_obj(initial_state, ('upData', 'mid')),
-            'like_count': traverse_obj(initial_state, ('videoData', 'stat', 'like')),
-            'comment_count': traverse_obj(initial_state, ('videoData', 'stat', 'reply')),
-            'tags': traverse_obj(initial_state, ('tags', ..., 'tag_name')),
-            'thumbnail': traverse_obj(initial_state, ('videoData', 'pic')),
-            'timestamp': traverse_obj(initial_state, ('videoData', 'pubdate')),
             'duration': float_or_none(play_info.get('timelength'), scale=1000),
             'chapters': self._get_chapters(aid, cid),
-            'subtitles': self.extract_subtitles(video_id, initial_state, cid),
+            'subtitles': self.extract_subtitles(video_id, aid, cid),
             '__post_extractor': self.extract_comments(aid),
             'http_headers': {'Referer': url},
         }
@@ -451,19 +517,63 @@ class BilibiliSpaceVideoIE(BilibiliSpaceBaseIE):
             'id': '3985676',
         },
         'playlist_mincount': 178,
+    }, {
+        'url': 'https://space.bilibili.com/313580179/video',
+        'info_dict': {
+            'id': '313580179',
+        },
+        'playlist_mincount': 92,
     }]

+    def _extract_signature(self, playlist_id):
+        session_data = self._download_json('https://api.bilibili.com/x/web-interface/nav', playlist_id, fatal=False)
+
+        key_from_url = lambda x: x[x.rfind('/') + 1:].split('.')[0]
+        img_key = traverse_obj(
+            session_data, ('data', 'wbi_img', 'img_url', {key_from_url})) or '34478ba821254d9d93542680e3b86100'
+        sub_key = traverse_obj(
+            session_data, ('data', 'wbi_img', 'sub_url', {key_from_url})) or '7e16a90d190a4355a78fd00b32a38de6'
+
+        session_key = img_key + sub_key
+
+        signature_values = []
+        for position in (
+            46, 47, 18, 2, 53, 8, 23, 32, 15, 50, 10, 31, 58, 3, 45, 35, 27, 43, 5, 49, 33, 9, 42, 19, 29, 28, 14, 39,
+            12, 38, 41, 13, 37, 48, 7, 16, 24, 55, 40, 61, 26, 17, 0, 1, 60, 51, 30, 4, 22, 25, 54, 21, 56, 59, 6, 63,
+            57, 62, 11, 36, 20, 34, 44, 52
+        ):
+            char_at_position = try_call(lambda: session_key[position])
+            if char_at_position:
+                signature_values.append(char_at_position)
+
+        return ''.join(signature_values)[:32]
+
     def _real_extract(self, url):
         playlist_id, is_video_url = self._match_valid_url(url).group('id', 'video')
         if not is_video_url:
             self.to_screen('A channel URL was given. Only the channel\'s videos will be downloaded. '
                            'To download audios, add a "/audio" to the URL')

+        signature = self._extract_signature(playlist_id)
+
         def fetch_page(page_idx):
+            query = {
+                'keyword': '',
+                'mid': playlist_id,
+                'order': 'pubdate',
+                'order_avoided': 'true',
+                'platform': 'web',
+                'pn': page_idx + 1,
+                'ps': 30,
+                'tid': 0,
+                'web_location': 1550101,
+                'wts': int(time.time()),
+            }
+            query['w_rid'] = hashlib.md5(f'{urllib.parse.urlencode(query)}{signature}'.encode()).hexdigest()
+
             try:
-                response = self._download_json('https://api.bilibili.com/x/space/arc/search',
-                                               playlist_id, note=f'Downloading page {page_idx}',
-                                               query={'mid': playlist_id, 'pn': page_idx + 1, 'jsonp': 'jsonp'})
+                response = self._download_json('https://api.bilibili.com/x/space/wbi/arc/search',
+                                               playlist_id, note=f'Downloading page {page_idx}', query=query)
             except ExtractorError as e:
                 if isinstance(e.cause, urllib.error.HTTPError) and e.cause.code == 412:
                     raise ExtractorError(
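Note: _extract_signature above implements bilibili's WBI request signing: the nav endpoint's img/sub keys are concatenated, shuffled through a fixed position table into a 32-character mixin key, and w_rid is the MD5 of the urlencoded query (with wts appended) plus that key. A self-contained sketch using the fallback keys from the diff (the mid value is illustrative):

    import hashlib
    import time
    import urllib.parse

    MIXIN_KEY_ENC_TAB = [
        46, 47, 18, 2, 53, 8, 23, 32, 15, 50, 10, 31, 58, 3, 45, 35, 27, 43, 5, 49,
        33, 9, 42, 19, 29, 28, 14, 39, 12, 38, 41, 13, 37, 48, 7, 16, 24, 55, 40, 61,
        26, 17, 0, 1, 60, 51, 30, 4, 22, 25, 54, 21, 56, 59, 6, 63, 57, 62, 11, 36,
        20, 34, 44, 52]

    def wbi_sign(query, img_key, sub_key):
        session_key = img_key + sub_key
        # shuffle the 64-char key through the table, keep the first 32 chars
        mixin_key = ''.join(session_key[pos] for pos in MIXIN_KEY_ENC_TAB)[:32]
        query = dict(query, wts=int(time.time()))
        query['w_rid'] = hashlib.md5(
            f'{urllib.parse.urlencode(query)}{mixin_key}'.encode()).hexdigest()
        return query

    signed = wbi_sign({'mid': '313580179', 'pn': 1, 'ps': 30},
                      '34478ba821254d9d93542680e3b86100',
                      '7e16a90d190a4355a78fd00b32a38de6')
    print(signed['w_rid'])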
@ -493,9 +603,9 @@ def get_entries(page_data):
|
|||||||
class BilibiliSpaceAudioIE(BilibiliSpaceBaseIE):
|
class BilibiliSpaceAudioIE(BilibiliSpaceBaseIE):
|
||||||
_VALID_URL = r'https?://space\.bilibili\.com/(?P<id>\d+)/audio'
|
_VALID_URL = r'https?://space\.bilibili\.com/(?P<id>\d+)/audio'
|
||||||
_TESTS = [{
|
_TESTS = [{
|
||||||
'url': 'https://space.bilibili.com/3985676/audio',
|
'url': 'https://space.bilibili.com/313580179/audio',
|
||||||
'info_dict': {
|
'info_dict': {
|
||||||
'id': '3985676',
|
'id': '313580179',
|
||||||
},
|
},
|
||||||
'playlist_mincount': 1,
|
'playlist_mincount': 1,
|
||||||
}]
|
}]
|
||||||
@ -893,22 +1003,15 @@ def _parse_video_metadata(self, video_data):
|
|||||||
}
|
}
|
||||||
|
|
||||||
def _perform_login(self, username, password):
|
def _perform_login(self, username, password):
|
||||||
try:
|
if not Cryptodome.RSA:
|
||||||
from Cryptodome.PublicKey import RSA
|
raise ExtractorError('pycryptodomex not found. Please install', expected=True)
|
||||||
from Cryptodome.Cipher import PKCS1_v1_5
|
|
||||||
except ImportError:
|
|
||||||
try:
|
|
||||||
from Crypto.PublicKey import RSA
|
|
||||||
from Crypto.Cipher import PKCS1_v1_5
|
|
||||||
except ImportError:
|
|
||||||
raise ExtractorError('pycryptodomex not found. Please install', expected=True)
|
|
||||||
|
|
||||||
key_data = self._download_json(
|
key_data = self._download_json(
|
||||||
'https://passport.bilibili.tv/x/intl/passport-login/web/key?lang=en-US', None,
|
'https://passport.bilibili.tv/x/intl/passport-login/web/key?lang=en-US', None,
|
||||||
note='Downloading login key', errnote='Unable to download login key')['data']
|
note='Downloading login key', errnote='Unable to download login key')['data']
|
||||||
|
|
||||||
public_key = RSA.importKey(key_data['key'])
|
public_key = Cryptodome.RSA.importKey(key_data['key'])
|
||||||
password_hash = PKCS1_v1_5.new(public_key).encrypt((key_data['hash'] + password).encode('utf-8'))
|
password_hash = Cryptodome.PKCS1_v1_5.new(public_key).encrypt((key_data['hash'] + password).encode('utf-8'))
|
||||||
login_post = self._download_json(
|
login_post = self._download_json(
|
||||||
'https://passport.bilibili.tv/x/intl/passport-login/web/login/password?lang=en-US', None, data=urlencode_postdata({
|
'https://passport.bilibili.tv/x/intl/passport-login/web/login/password?lang=en-US', None, data=urlencode_postdata({
|
||||||
'username': username,
|
'username': username,
|
||||||
@ -939,6 +1042,19 @@ class BiliIntlIE(BiliIntlBaseIE):
|
|||||||
'episode': 'Episode 2',
|
'episode': 'Episode 2',
|
||||||
'timestamp': 1602259500,
|
'timestamp': 1602259500,
|
||||||
'description': 'md5:297b5a17155eb645e14a14b385ab547e',
|
'description': 'md5:297b5a17155eb645e14a14b385ab547e',
|
||||||
|
'chapters': [{
|
||||||
|
'start_time': 0,
|
||||||
|
'end_time': 76.242,
|
||||||
|
'title': '<Untitled Chapter 1>'
|
||||||
|
}, {
|
||||||
|
'start_time': 76.242,
|
||||||
|
'end_time': 161.161,
|
||||||
|
'title': 'Intro'
|
||||||
|
}, {
|
||||||
|
'start_time': 1325.742,
|
||||||
|
'end_time': 1403.903,
|
||||||
|
'title': 'Outro'
|
||||||
|
}],
|
||||||
}
|
}
|
||||||
}, {
|
}, {
|
||||||
# Non-Bstation page
|
# Non-Bstation page
|
||||||
@@ -953,6 +1069,19 @@ class BiliIntlIE(BiliIntlBaseIE):
             'episode': 'Episode 3',
             'upload_date': '20211219',
             'timestamp': 1639928700,
+            'chapters': [{
+                'start_time': 0,
+                'end_time': 88.0,
+                'title': '<Untitled Chapter 1>'
+            }, {
+                'start_time': 88.0,
+                'end_time': 156.0,
+                'title': 'Intro'
+            }, {
+                'start_time': 1173.0,
+                'end_time': 1259.535,
+                'title': 'Outro'
+            }],
         }
     }, {
         # Subtitle with empty content
@@ -975,7 +1104,68 @@ class BiliIntlIE(BiliIntlBaseIE):
         'thumbnail': r're:https?://pic[-\.]bstarstatic.+/ugc/.+\.jpg$',
         'upload_date': '20221212',
         'title': 'Kimetsu no Yaiba Season 3 Official Trailer - Bstation',
+        },
+    }, {
+        # episode comment extraction
+        'url': 'https://www.bilibili.tv/en/play/34580/340317',
+        'info_dict': {
+            'id': '340317',
+            'ext': 'mp4',
+            'timestamp': 1604057820,
+            'upload_date': '20201030',
+            'episode_number': 5,
+            'title': 'E5 - My Own Steel',
+            'description': 'md5:2b17ab10aebb33e3c2a54da9e8e487e2',
+            'thumbnail': r're:https?://pic\.bstarstatic\.com/ogv/.+\.png$',
+            'episode': 'Episode 5',
+            'comment_count': int,
+            'chapters': [{
+                'start_time': 0,
+                'end_time': 61.0,
+                'title': '<Untitled Chapter 1>'
+            }, {
+                'start_time': 61.0,
+                'end_time': 134.0,
+                'title': 'Intro'
+            }, {
+                'start_time': 1290.0,
+                'end_time': 1379.0,
+                'title': 'Outro'
+            }],
+        },
+        'params': {
+            'getcomments': True
         }
+    }, {
+        # user generated content comment extraction
+        'url': 'https://www.bilibili.tv/en/video/2045730385',
+        'info_dict': {
+            'id': '2045730385',
+            'ext': 'mp4',
+            'description': 'md5:693b6f3967fb4e7e7764ea817857c33a',
+            'timestamp': 1667891924,
+            'upload_date': '20221108',
+            'title': 'That Time I Got Reincarnated as a Slime: Scarlet Bond - Official Trailer 3| AnimeStan - Bstation',
+            'comment_count': int,
+            'thumbnail': 'https://pic.bstarstatic.com/ugc/f6c363659efd2eabe5683fbb906b1582.jpg',
+        },
+        'params': {
+            'getcomments': True
+        }
+    }, {
+        # episode id without intro and outro
+        'url': 'https://www.bilibili.tv/en/play/1048837/11246489',
+        'info_dict': {
+            'id': '11246489',
+            'ext': 'mp4',
+            'title': 'E1 - Operation \'Strix\' <Owl>',
+            'description': 'md5:b4434eb1a9a97ad2bccb779514b89f17',
+            'timestamp': 1649516400,
+            'thumbnail': 'https://pic.bstarstatic.com/ogv/62cb1de23ada17fb70fbe7bdd6ff29c29da02a64.png',
+            'episode': 'Episode 1',
+            'episode_number': 1,
+            'upload_date': '20220409',
+        },
     }, {
         'url': 'https://www.biliintl.com/en/play/34613/341736',
         'only_matching': True,
@@ -1020,20 +1210,98 @@ def _extract_video_metadata(self, url, video_id, season_id):

         # XXX: webpage metadata may not be accurate; it is only used to avoid a crash when video_data is not found
         return merge_dicts(
-            self._parse_video_metadata(video_data), self._search_json_ld(webpage, video_id), {
+            self._parse_video_metadata(video_data), self._search_json_ld(webpage, video_id, fatal=False), {
                 'title': self._html_search_meta('og:title', webpage),
                 'description': self._html_search_meta('og:description', webpage)
             })

+    def _get_comments_reply(self, root_id, next_id=0, display_id=None):
+        comment_api_raw_data = self._download_json(
+            'https://api.bilibili.tv/reply/web/detail', display_id,
+            note=f'Downloading reply comment of {root_id} - {next_id}',
+            query={
+                'platform': 'web',
+                'ps': 20,  # comment's replies per page (default: 3)
+                'root': root_id,
+                'next': next_id,
+            })
+
+        for replies in traverse_obj(comment_api_raw_data, ('data', 'replies', ...)):
+            yield {
+                'author': traverse_obj(replies, ('member', 'name')),
+                'author_id': traverse_obj(replies, ('member', 'mid')),
+                'author_thumbnail': traverse_obj(replies, ('member', 'face')),
+                'text': traverse_obj(replies, ('content', 'message')),
+                'id': replies.get('rpid'),
+                'like_count': int_or_none(replies.get('like_count')),
+                'parent': replies.get('parent'),
+                'timestamp': unified_timestamp(replies.get('ctime_text'))
+            }
+
+        if not traverse_obj(comment_api_raw_data, ('data', 'cursor', 'is_end')):
+            yield from self._get_comments_reply(
+                root_id, comment_api_raw_data['data']['cursor']['next'], display_id)
+
+    def _get_comments(self, video_id, ep_id):
+        for i in itertools.count(0):
+            comment_api_raw_data = self._download_json(
+                'https://api.bilibili.tv/reply/web/root', video_id,
+                note=f'Downloading comment page {i + 1}',
+                query={
+                    'platform': 'web',
+                    'pn': i,  # page number
+                    'ps': 20,  # comments per page (default: 20)
+                    'oid': video_id,
+                    'type': 3 if ep_id else 1,  # 1: user generated content, 3: series content
+                    'sort_type': 1,  # 1: best, 2: recent
+                })
+
+            for replies in traverse_obj(comment_api_raw_data, ('data', 'replies', ...)):
+                yield {
+                    'author': traverse_obj(replies, ('member', 'name')),
+                    'author_id': traverse_obj(replies, ('member', 'mid')),
+                    'author_thumbnail': traverse_obj(replies, ('member', 'face')),
+                    'text': traverse_obj(replies, ('content', 'message')),
+                    'id': replies.get('rpid'),
+                    'like_count': int_or_none(replies.get('like_count')),
+                    'timestamp': unified_timestamp(replies.get('ctime_text')),
+                    'author_is_uploader': bool(traverse_obj(replies, ('member', 'type'))),
+                }
+                if replies.get('count'):
+                    yield from self._get_comments_reply(replies.get('rpid'), display_id=video_id)
+
+            if traverse_obj(comment_api_raw_data, ('data', 'cursor', 'is_end')):
+                break
+
     def _real_extract(self, url):
         season_id, ep_id, aid = self._match_valid_url(url).group('season_id', 'ep_id', 'aid')
         video_id = ep_id or aid
+        chapters = None
+
+        if ep_id:
+            intro_ending_json = self._call_api(
+                f'/web/v2/ogv/play/episode?episode_id={ep_id}&platform=web',
+                video_id, fatal=False) or {}
+            if intro_ending_json.get('skip'):
+                # FIXME: start and end times seem to be off by a few seconds, even though they are correct according to ogv.*.js
+                # ref: https://p.bstarstatic.com/fe-static/bstar-web-new/assets/ogv.2b147442.js
+                chapters = [{
+                    'start_time': float_or_none(traverse_obj(intro_ending_json, ('skip', 'opening_start_time')), 1000),
+                    'end_time': float_or_none(traverse_obj(intro_ending_json, ('skip', 'opening_end_time')), 1000),
+                    'title': 'Intro'
+                }, {
+                    'start_time': float_or_none(traverse_obj(intro_ending_json, ('skip', 'ending_start_time')), 1000),
+                    'end_time': float_or_none(traverse_obj(intro_ending_json, ('skip', 'ending_end_time')), 1000),
+                    'title': 'Outro'
+                }]
+
         return {
             'id': video_id,
             **self._extract_video_metadata(url, video_id, season_id),
             'formats': self._get_formats(ep_id=ep_id, aid=aid),
             'subtitles': self.extract_subtitles(ep_id=ep_id, aid=aid),
+            'chapters': chapters,
+            '__post_extractor': self.extract_comments(video_id, ep_id)
         }

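Both comment endpoints above page with a cursor: each response carries `data.cursor.is_end` and `data.cursor.next`. A standalone sketch of that loop, where `fetch_page` is a stand-in for the JSON download:

    # Sketch of the cursor pagination used above; fetch_page is assumed
    # to return the parsed JSON for one page of replies.
    def iter_replies(fetch_page, root_id, next_id=0):
        while True:
            page = fetch_page(root=root_id, next=next_id)
            yield from (page.get('data') or {}).get('replies') or []
            cursor = (page.get('data') or {}).get('cursor') or {}
            if cursor.get('is_end'):
                break
            next_id = cursor['next']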
@@ -77,7 +77,10 @@ class BitChuteIE(InfoExtractor):
     def _check_format(self, video_url, video_id):
         urls = orderedSet(
             re.sub(r'(^https?://)(seed\d+)(?=\.bitchute\.com)', fr'\g<1>{host}', video_url)
-            for host in (r'\g<2>', 'seed150', 'seed151', 'seed152', 'seed153'))
+            for host in (r'\g<2>', 'seed122', 'seed125', 'seed126', 'seed128',
+                         'seed132', 'seed150', 'seed151', 'seed152', 'seed153',
+                         'seed167', 'seed171', 'seed177', 'seed305', 'seed307',
+                         'seedp29xb', 'zb10-7gsop1v78'))
         for url in urls:
             try:
                 response = self._request_webpage(
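The host list above feeds a regex substitution that rewrites the `seedNNN` subdomain of the CDN URL so each mirror can be probed in turn. A hedged sketch of that rewrite (host list illustrative):

    import re

    # Rewrite the seed subdomain of a BitChute CDN URL to probe mirrors.
    def candidate_urls(video_url, hosts=('seed122', 'seed150', 'seed305')):
        seen, out = set(), []
        for host in (r'\g<2>', *hosts):  # r'\g<2>' keeps the original host first
            url = re.sub(r'(^https?://)(seed\d+)(?=\.bitchute\.com)', fr'\g<1>{host}', video_url)
            if url not in seen:
                seen.add(url)
                out.append(url)
        return out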
167 yt_dlp/extractor/blerp.py
@@ -0,0 +1,167 @@
+import json
+
+from .common import InfoExtractor
+from ..utils import strip_or_none, traverse_obj
+
+
+class BlerpIE(InfoExtractor):
+    IE_NAME = 'blerp'
+    _VALID_URL = r'https?://(?:www\.)?blerp\.com/soundbites/(?P<id>[0-9a-zA-Z]+)'
+    _TESTS = [{
+        'url': 'https://blerp.com/soundbites/6320fe8745636cb4dd677a5a',
+        'info_dict': {
+            'id': '6320fe8745636cb4dd677a5a',
+            'title': 'Samsung Galaxy S8 Over the Horizon Ringtone 2016',
+            'uploader': 'luminousaj',
+            'uploader_id': '5fb81e51aa66ae000c395478',
+            'ext': 'mp3',
+            'tags': ['samsung', 'galaxy', 's8', 'over the horizon', '2016', 'ringtone'],
+        }
+    }, {
+        'url': 'https://blerp.com/soundbites/5bc94ef4796001000498429f',
+        'info_dict': {
+            'id': '5bc94ef4796001000498429f',
+            'title': 'Yee',
+            'uploader': '179617322678353920',
+            'uploader_id': '5ba99cf71386730004552c42',
+            'ext': 'mp3',
+            'tags': ['YEE', 'YEET', 'wo ha haah catchy tune yee', 'yee']
+        }
+    }]
+
+    _GRAPHQL_OPERATIONNAME = "webBitePageGetBite"
+    _GRAPHQL_QUERY = (
+        '''query webBitePageGetBite($_id: MongoID!) {
+            web {
+                biteById(_id: $_id) {
+                    ...bitePageFrag
+                    __typename
+                }
+                __typename
+            }
+        }
+
+        fragment bitePageFrag on Bite {
+            _id
+            title
+            userKeywords
+            keywords
+            color
+            visibility
+            isPremium
+            owned
+            price
+            extraReview
+            isAudioExists
+            image {
+                filename
+                original {
+                    url
+                    __typename
+                }
+                __typename
+            }
+            userReactions {
+                _id
+                reactions
+                createdAt
+                __typename
+            }
+            topReactions
+            totalSaveCount
+            saved
+            blerpLibraryType
+            license
+            licenseMetaData
+            playCount
+            totalShareCount
+            totalFavoriteCount
+            totalAddedToBoardCount
+            userCategory
+            userAudioQuality
+            audioCreationState
+            transcription
+            userTranscription
+            description
+            createdAt
+            updatedAt
+            author
+            listingType
+            ownerObject {
+                _id
+                username
+                profileImage {
+                    filename
+                    original {
+                        url
+                        __typename
+                    }
+                    __typename
+                }
+                __typename
+            }
+            transcription
+            favorited
+            visibility
+            isCurated
+            sourceUrl
+            audienceRating
+            strictAudienceRating
+            ownerId
+            reportObject {
+                reportedContentStatus
+                __typename
+            }
+            giphy {
+                mp4
+                gif
+                __typename
+            }
+            audio {
+                filename
+                original {
+                    url
+                    __typename
+                }
+                mp3 {
+                    url
+                    __typename
+                }
+                __typename
+            }
+            __typename
+        }
+
+        ''')
+
+    def _real_extract(self, url):
+        audio_id = self._match_id(url)
+
+        data = {
+            'operationName': self._GRAPHQL_OPERATIONNAME,
+            'query': self._GRAPHQL_QUERY,
+            'variables': {
+                '_id': audio_id
+            }
+        }
+
+        headers = {
+            'Content-Type': 'application/json'
+        }
+
+        json_result = self._download_json('https://api.blerp.com/graphql',
+                                          audio_id, data=json.dumps(data).encode('utf-8'), headers=headers)
+
+        bite_json = json_result['data']['web']['biteById']
+
+        info_dict = {
+            'id': bite_json['_id'],
+            'url': bite_json['audio']['mp3']['url'],
+            'title': bite_json['title'],
+            'uploader': traverse_obj(bite_json, ('ownerObject', 'username'), expected_type=strip_or_none),
+            'uploader_id': traverse_obj(bite_json, ('ownerObject', '_id'), expected_type=strip_or_none),
+            'ext': 'mp3',
+            'tags': list(filter(None, map(strip_or_none, (traverse_obj(bite_json, 'userKeywords', expected_type=list) or []))) or None)
+        }
+
+        return info_dict
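The extractor drives Blerp's GraphQL endpoint with a plain JSON POST. A minimal sketch of the same call outside yt-dlp, where `urllib` stands in for the downloader and `query` is the `_GRAPHQL_QUERY` string above:

    import json
    import urllib.request

    # Sketch of the GraphQL call above; endpoint and response path taken
    # from the new extractor.
    def get_bite(audio_id, query):
        body = json.dumps({
            'operationName': 'webBitePageGetBite',
            'query': query,
            'variables': {'_id': audio_id},
        }).encode('utf-8')
        req = urllib.request.Request(
            'https://api.blerp.com/graphql', data=body,
            headers={'Content-Type': 'application/json'})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)['data']['web']['biteById']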
86 yt_dlp/extractor/booyah.py
@@ -1,86 +0,0 @@
-from .common import InfoExtractor
-from ..utils import int_or_none, str_or_none, traverse_obj
-
-
-class BooyahBaseIE(InfoExtractor):
-    _BOOYAH_SESSION_KEY = None
-
-    def _real_initialize(self):
-        BooyahBaseIE._BOOYAH_SESSION_KEY = self._request_webpage(
-            'https://booyah.live/api/v3/auths/sessions', None, data=b'').getheader('booyah-session-key')
-
-    def _get_comments(self, video_id):
-        comment_json = self._download_json(
-            f'https://booyah.live/api/v3/playbacks/{video_id}/comments/tops', video_id,
-            headers={'Booyah-Session-Key': self._BOOYAH_SESSION_KEY}, fatal=False) or {}
-
-        return [{
-            'id': comment.get('comment_id'),
-            'author': comment.get('from_nickname'),
-            'author_id': comment.get('from_uid'),
-            'author_thumbnail': comment.get('from_thumbnail'),
-            'text': comment.get('content'),
-            'timestamp': comment.get('create_time'),
-            'like_count': comment.get('like_cnt'),
-        } for comment in comment_json.get('comment_list') or ()]
-
-
-class BooyahClipsIE(BooyahBaseIE):
-    _VALID_URL = r'https?://booyah.live/clips/(?P<id>\d+)'
-    _TESTS = [{
-        'url': 'https://booyah.live/clips/13887261322952306617',
-        'info_dict': {
-            'id': '13887261322952306617',
-            'ext': 'mp4',
-            'view_count': int,
-            'duration': 30,
-            'channel_id': 90565760,
-            'like_count': int,
-            'title': 'Cayendo con estilo 😎',
-            'uploader': '♡LɪꜱGΛMER',
-            'comment_count': int,
-            'uploader_id': '90565760',
-            'thumbnail': 'https://resmambet-a.akamaihd.net/mambet-storage/Clip/90565760/90565760-27204374-fba0-409d-9d7b-63a48b5c0e75.jpg',
-            'upload_date': '20220617',
-            'timestamp': 1655490556,
-            'modified_timestamp': 1655490556,
-            'modified_date': '20220617',
-        }
-    }]
-
-    def _real_extract(self, url):
-        video_id = self._match_id(url)
-        json_data = self._download_json(
-            f'https://booyah.live/api/v3/playbacks/{video_id}', video_id,
-            headers={'Booyah-Session-key': self._BOOYAH_SESSION_KEY})
-
-        formats = []
-        for video_data in json_data['playback']['endpoint_list']:
-            formats.extend(({
-                'url': video_data.get('stream_url'),
-                'ext': 'mp4',
-                'height': video_data.get('resolution'),
-            }, {
-                'url': video_data.get('download_url'),
-                'ext': 'mp4',
-                'format_note': 'Watermarked',
-                'height': video_data.get('resolution'),
-                'preference': -10,
-            }))
-
-        return {
-            'id': video_id,
-            'title': traverse_obj(json_data, ('playback', 'name')),
-            'thumbnail': traverse_obj(json_data, ('playback', 'thumbnail_url')),
-            'formats': formats,
-            'view_count': traverse_obj(json_data, ('playback', 'views')),
-            'like_count': traverse_obj(json_data, ('playback', 'likes')),
-            'duration': traverse_obj(json_data, ('playback', 'duration')),
-            'comment_count': traverse_obj(json_data, ('playback', 'comment_cnt')),
-            'channel_id': traverse_obj(json_data, ('playback', 'channel_id')),
-            'uploader': traverse_obj(json_data, ('user', 'nickname')),
-            'uploader_id': str_or_none(traverse_obj(json_data, ('user', 'uid'))),
-            'modified_timestamp': int_or_none(traverse_obj(json_data, ('playback', 'update_time_ms')), 1000),
-            'timestamp': int_or_none(traverse_obj(json_data, ('playback', 'create_time_ms')), 1000),
-            '__post_extractor': self.extract_comments(video_id, self._get_comments(video_id)),
-        }
102 yt_dlp/extractor/boxcast.py
@@ -0,0 +1,102 @@
+from .common import InfoExtractor
+from ..utils import (
+    js_to_json,
+    traverse_obj,
+    unified_timestamp
+)
+
+
+class BoxCastVideoIE(InfoExtractor):
+    _VALID_URL = r'''(?x)
+        https?://boxcast\.tv/(?:
+            view-embed/|
+            channel/\w+\?(?:[^#]+&)?b=|
+            video-portal/(?:\w+/){2}
+        )(?P<id>[\w-]+)'''
+    _EMBED_REGEX = [r'<iframe[^>]+src=["\'](?P<url>https?://boxcast\.tv/view-embed/[\w-]+)']
+    _TESTS = [{
+        'url': 'https://boxcast.tv/view-embed/in-the-midst-of-darkness-light-prevails-an-interdisciplinary-symposium-ozmq5eclj50ujl4bmpwx',
+        'info_dict': {
+            'id': 'da1eqqgkacngd5djlqld',
+            'ext': 'mp4',
+            'thumbnail': r're:https?://uploads\.boxcast\.com/(?:[\w+-]+/){3}.+\.png$',
+            'title': 'In the Midst of Darkness Light Prevails: An Interdisciplinary Symposium',
+            'release_timestamp': 1670686812,
+            'release_date': '20221210',
+            'uploader_id': 're8w0v8hohhvpqtbskpe',
+            'uploader': 'Children\'s Health Defense',
+        }
+    }, {
+        'url': 'https://boxcast.tv/video-portal/vctwevwntun3o0ikq7af/rvyblnn0fxbfjx5nwxhl/otbpltj2kzkveo2qz3ad',
+        'info_dict': {
+            'id': 'otbpltj2kzkveo2qz3ad',
+            'ext': 'mp4',
+            'uploader_id': 'vctwevwntun3o0ikq7af',
+            'uploader': 'Legacy Christian Church',
+            'title': 'The Quest | 1: Beginner\'s Bay | Jamie Schools',
+            'thumbnail': r're:https?://uploads.boxcast.com/(?:[\w-]+/){3}.+\.jpg'
+        }
+    }, {
+        'url': 'https://boxcast.tv/channel/z03fqwaeaby5lnaawox2?b=ssihlw5gvfij2by8tkev',
+        'info_dict': {
+            'id': 'ssihlw5gvfij2by8tkev',
+            'ext': 'mp4',
+            'thumbnail': r're:https?://uploads.boxcast.com/(?:[\w-]+/){3}.+\.jpg$',
+            'release_date': '20230101',
+            'uploader_id': 'ds25vaazhlu4ygcvffid',
+            'release_timestamp': 1672543201,
+            'uploader': 'Lighthouse Ministries International - Beltsville, Maryland',
+            'description': 'md5:ac23e3d01b0b0be592e8f7fe0ec3a340',
+            'title': 'New Year\'s Eve CROSSOVER Service at LHMI | December 31, 2022',
+        }
+    }]
+    _WEBPAGE_TESTS = [{
+        'url': 'https://childrenshealthdefense.eu/live-stream/',
+        'info_dict': {
+            'id': 'da1eqqgkacngd5djlqld',
+            'ext': 'mp4',
+            'thumbnail': r're:https?://uploads\.boxcast\.com/(?:[\w+-]+/){3}.+\.png$',
+            'title': 'In the Midst of Darkness Light Prevails: An Interdisciplinary Symposium',
+            'release_timestamp': 1670686812,
+            'release_date': '20221210',
+            'uploader_id': 're8w0v8hohhvpqtbskpe',
+            'uploader': 'Children\'s Health Defense',
+        }
+    }]
+
+    def _real_extract(self, url):
+        display_id = self._match_id(url)
+        webpage = self._download_webpage(url, display_id)
+        webpage_json_data = self._search_json(
+            r'var\s*BOXCAST_PRELOAD\s*=', webpage, 'broadcast data', display_id,
+            transform_source=js_to_json, default={})
+
+        # Ref: https://support.boxcast.com/en/articles/4235158-build-a-custom-viewer-experience-with-boxcast-api
+        broadcast_json_data = (
+            traverse_obj(webpage_json_data, ('broadcast', 'data'))
+            or self._download_json(f'https://api.boxcast.com/broadcasts/{display_id}', display_id))
+        view_json_data = (
+            traverse_obj(webpage_json_data, ('view', 'data'))
+            or self._download_json(f'https://api.boxcast.com/broadcasts/{display_id}/view',
+                                   display_id, fatal=False) or {})
+
+        formats, subtitles = [], {}
+        if view_json_data.get('status') == 'recorded':
+            formats, subtitles = self._extract_m3u8_formats_and_subtitles(
+                view_json_data['playlist'], display_id)
+
+        return {
+            'id': str(broadcast_json_data['id']),
+            'title': (broadcast_json_data.get('name')
+                      or self._html_search_meta(['og:title', 'twitter:title'], webpage)),
+            'description': (broadcast_json_data.get('description')
+                            or self._html_search_meta(['og:description', 'twitter:description'], webpage)
+                            or None),
+            'thumbnail': (broadcast_json_data.get('preview')
+                          or self._html_search_meta(['og:image', 'twitter:image'], webpage)),
+            'formats': formats,
+            'subtitles': subtitles,
+            'release_timestamp': unified_timestamp(broadcast_json_data.get('streamed_at')),
+            'uploader': broadcast_json_data.get('account_name'),
+            'uploader_id': broadcast_json_data.get('account_id'),
+        }
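The extractor above prefers the `BOXCAST_PRELOAD` object embedded in the page and only falls back to the REST API when it is absent. A small sketch of that fallback, where `fetch_json` is a stand-in for the HTTP layer and `preload` is the parsed page object:

    # Sketch of the preload-or-API fallback used above.
    def broadcast_data(preload, display_id, fetch_json):
        data = (preload.get('broadcast') or {}).get('data')
        if data:
            return data  # embedded in the page, no extra request needed
        return fetch_json(f'https://api.boxcast.com/broadcasts/{display_id}')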
318 yt_dlp/extractor/brainpop.py
@@ -0,0 +1,318 @@
+import json
+import re
+
+from .common import InfoExtractor
+from ..utils import (
+    classproperty,
+    int_or_none,
+    traverse_obj,
+    urljoin
+)
+
+
+class BrainPOPBaseIE(InfoExtractor):
+    _NETRC_MACHINE = 'brainpop'
+    _ORIGIN = ''  # So that _VALID_URL doesn't crash
+    _LOGIN_ERRORS = {
+        1502: 'The username and password you entered did not match.',  # LOGIN_FAILED
+        1503: 'Payment method is expired.',  # LOGIN_FAILED_ACCOUNT_NOT_ACTIVE
+        1506: 'Your BrainPOP plan has expired.',  # LOGIN_FAILED_ACCOUNT_EXPIRED
+        1507: 'Terms not accepted.',  # LOGIN_FAILED_TERMS_NOT_ACCEPTED
+        1508: 'Account not activated.',  # LOGIN_FAILED_SUBSCRIPTION_NOT_ACTIVE
+        1512: 'The maximum number of devices permitted are logged in with your account right now.',  # LOGIN_FAILED_LOGIN_LIMIT_REACHED
+        1513: 'You are trying to access your account from outside of its allowed IP range.',  # LOGIN_FAILED_INVALID_IP
+        1514: 'Individual accounts are not included in your plan. Try again with your shared username and password.',  # LOGIN_FAILED_MBP_DISABLED
+        1515: 'Account not activated.',  # LOGIN_FAILED_TEACHER_NOT_ACTIVE
+        1523: 'That username and password won\'t work on this BrainPOP site.',  # LOGIN_FAILED_NO_ACCESS
+        1524: 'You\'ll need to join a class before you can login.',  # LOGIN_FAILED_STUDENT_NO_PERIOD
+        1526: 'Your account is locked. Reset your password, or ask a teacher or administrator for help.',  # LOGIN_FAILED_ACCOUNT_LOCKED
+    }
+
+    @classproperty
+    def _VALID_URL(cls):
+        root = re.escape(cls._ORIGIN).replace(r'https:', r'https?:').replace(r'www\.', r'(?:www\.)?')
+        return rf'{root}/(?P<slug>[^/]+/[^/]+/(?P<id>[^/?#&]+))'
+
+    def _assemble_formats(self, slug, format_id, display_id, token='', extra_fields={}):
+        formats = []
+        formats = self._extract_m3u8_formats(
+            f'{urljoin(self._HLS_URL, slug)}.m3u8?{token}',
+            display_id, 'mp4', m3u8_id=f'{format_id}-hls', fatal=False)
+        formats.append({
+            'format_id': format_id,
+            'url': f'{urljoin(self._VIDEO_URL, slug)}?{token}',
+        })
+        for f in formats:
+            f.update(extra_fields)
+        return formats
+
+    def _extract_adaptive_formats(self, data, token, display_id, key_format='%s', extra_fields={}):
+        formats = []
+        additional_key_formats = {
+            '%s': {},
+            'ad_%s': {
+                'format_note': 'Audio description',
+                'source_preference': -2
+            }
+        }
+        for additional_key_format, additional_key_fields in additional_key_formats.items():
+            for key_quality, key_index in enumerate(('high', 'low')):
+                full_key_index = additional_key_format % (key_format % key_index)
+                if data.get(full_key_index):
+                    formats.extend(self._assemble_formats(data[full_key_index], full_key_index, display_id, token, {
+                        'quality': -1 - key_quality,
+                        **additional_key_fields,
+                        **extra_fields
+                    }))
+        return formats
+
+    def _perform_login(self, username, password):
+        login_res = self._download_json(
+            'https://api.brainpop.com/api/login', None,
+            data=json.dumps({'username': username, 'password': password}).encode(),
+            headers={
+                'Content-Type': 'application/json',
+                'Referer': self._ORIGIN
+            }, note='Logging in', errnote='Unable to log in', expected_status=400)
+        status_code = int_or_none(login_res['status_code'])
+        if status_code != 1505:
+            self.report_warning(
+                f'Unable to login: {self._LOGIN_ERRORS.get(status_code) or login_res.get("message")}'
+                or f'Got status code {status_code}')
+
+
+class BrainPOPIE(BrainPOPBaseIE):
+    _ORIGIN = 'https://www.brainpop.com'
+    _VIDEO_URL = 'https://svideos.brainpop.com'
+    _HLS_URL = 'https://hls.brainpop.com'
+    _CDN_URL = 'https://cdn.brainpop.com'
+    _TESTS = [{
+        'url': 'https://www.brainpop.com/health/conflictresolution/martinlutherkingjr/movie?ref=null',
+        'md5': '3ead374233ae74c7f1b0029a01c972f0',
+        'info_dict': {
+            'id': '1f3259fa457292b4',
+            'ext': 'mp4',
+            'title': 'Martin Luther King, Jr.',
+            'display_id': 'martinlutherkingjr',
+            'description': 'md5:f403dbb2bf3ccc7cf4c59d9e43e3c349',
+        },
+    }, {
+        'url': 'https://www.brainpop.com/science/space/bigbang/',
+        'md5': '9a1ff0e77444dd9e437354eb669c87ec',
+        'info_dict': {
+            'id': 'acae52cd48c99acf',
+            'ext': 'mp4',
+            'title': 'Big Bang',
+            'display_id': 'bigbang',
+            'description': 'md5:3e53b766b0f116f631b13f4cae185d38',
+        },
+        'skip': 'Requires login',
+    }]
+
+    def _real_extract(self, url):
+        slug, display_id = self._match_valid_url(url).group('slug', 'id')
+        movie_data = self._download_json(
+            f'https://api.brainpop.com/api/content/published/bp/en/{slug}/movie?full=1', display_id,
+            'Downloading movie data JSON', 'Unable to download movie data')['data']
+        topic_data = traverse_obj(self._download_json(
+            f'https://api.brainpop.com/api/content/published/bp/en/{slug}?full=1', display_id,
+            'Downloading topic data JSON', 'Unable to download topic data', fatal=False),
+            ('data', 'topic'), expected_type=dict) or movie_data['topic']
+
+        if not traverse_obj(movie_data, ('access', 'allow')):
+            reason = traverse_obj(movie_data, ('access', 'reason'))
+            if 'logged' in reason:
+                self.raise_login_required(reason, metadata_available=True)
+            else:
+                self.raise_no_formats(reason, video_id=display_id)
+        movie_feature = movie_data['feature']
+        movie_feature_data = movie_feature['data']
+
+        formats, subtitles = [], {}
+        formats.extend(self._extract_adaptive_formats(movie_feature_data, movie_feature_data.get('token', ''), display_id, '%s_v2', {
+            'language': movie_feature.get('language') or 'en',
+            'language_preference': 10
+        }))
+        for lang, localized_feature in traverse_obj(movie_feature, 'localization', default={}, expected_type=dict).items():
+            formats.extend(self._extract_adaptive_formats(localized_feature, localized_feature.get('token', ''), display_id, '%s_v2', {
+                'language': lang,
+                'language_preference': -10
+            }))
+
+        # TODO: Do localization fields also have subtitles?
+        for name, url in movie_feature_data.items():
+            lang = self._search_regex(
+                r'^subtitles_(?P<lang>\w+)$', name, 'subtitle metadata', default=None)
+            if lang and url:
+                subtitles.setdefault(lang, []).append({
+                    'url': urljoin(self._CDN_URL, url)
+                })
+
+        return {
+            'id': topic_data['topic_id'],
+            'display_id': display_id,
+            'title': topic_data.get('name'),
+            'description': topic_data.get('synopsis'),
+            'formats': formats,
+            'subtitles': subtitles,
+        }
+
+
+class BrainPOPLegacyBaseIE(BrainPOPBaseIE):
+    def _parse_js_topic_data(self, topic_data, display_id, token):
+        movie_data = topic_data['movies']
+        # TODO: Are there non-burned subtitles?
+        formats = self._extract_adaptive_formats(movie_data, token, display_id)
+
+        return {
+            'id': topic_data['EntryID'],
+            'display_id': display_id,
+            'title': topic_data.get('name'),
+            'alt_title': topic_data.get('title'),
+            'description': topic_data.get('synopsis'),
+            'formats': formats,
+        }
+
+    def _real_extract(self, url):
+        slug, display_id = self._match_valid_url(url).group('slug', 'id')
+        webpage = self._download_webpage(url, display_id)
+        topic_data = self._search_json(
+            r'var\s+content\s*=\s*', webpage, 'content data',
+            display_id, end_pattern=';')['category']['unit']['topic']
+        token = self._search_regex(r'ec_token\s*:\s*[\'"]([^\'"]+)', webpage, 'video token')
+        return self._parse_js_topic_data(topic_data, display_id, token)
+
+
+class BrainPOPJrIE(BrainPOPLegacyBaseIE):
+    _ORIGIN = 'https://jr.brainpop.com'
+    _VIDEO_URL = 'https://svideos-jr.brainpop.com'
+    _HLS_URL = 'https://hls-jr.brainpop.com'
+    _CDN_URL = 'https://cdn-jr.brainpop.com'
+    _TESTS = [{
+        'url': 'https://jr.brainpop.com/health/feelingsandsel/emotions/',
+        'md5': '04e0561bb21770f305a0ce6cf0d869ab',
+        'info_dict': {
+            'id': '347',
+            'ext': 'mp4',
+            'title': 'Emotions',
+            'display_id': 'emotions',
+        },
+    }, {
+        'url': 'https://jr.brainpop.com/science/habitats/arctichabitats/',
+        'md5': 'b0ed063bbd1910df00220ee29340f5d6',
+        'info_dict': {
+            'id': '29',
+            'ext': 'mp4',
+            'title': 'Arctic Habitats',
+            'display_id': 'arctichabitats',
+        },
+        'skip': 'Requires login',
+    }]
+
+
+class BrainPOPELLIE(BrainPOPLegacyBaseIE):
+    _ORIGIN = 'https://ell.brainpop.com'
+    _VIDEO_URL = 'https://svideos-esl.brainpop.com'
+    _HLS_URL = 'https://hls-esl.brainpop.com'
+    _CDN_URL = 'https://cdn-esl.brainpop.com'
+    _TESTS = [{
+        'url': 'https://ell.brainpop.com/level1/unit1/lesson1/',
+        'md5': 'a2012700cfb774acb7ad2e8834eed0d0',
+        'info_dict': {
+            'id': '1',
+            'ext': 'mp4',
+            'title': 'Lesson 1',
+            'display_id': 'lesson1',
+            'alt_title': 'Personal Pronouns',
+        },
+    }, {
+        'url': 'https://ell.brainpop.com/level3/unit6/lesson5/',
+        'md5': 'be19c8292c87b24aacfb5fda2f3f8363',
+        'info_dict': {
+            'id': '101',
+            'ext': 'mp4',
+            'title': 'Lesson 5',
+            'display_id': 'lesson5',
+            'alt_title': 'Review: Unit 6',
+        },
+        'skip': 'Requires login',
+    }]
+
+
+class BrainPOPEspIE(BrainPOPLegacyBaseIE):
+    IE_DESC = 'BrainPOP Español'
+    _ORIGIN = 'https://esp.brainpop.com'
+    _VIDEO_URL = 'https://svideos.brainpop.com'
+    _HLS_URL = 'https://hls.brainpop.com'
+    _CDN_URL = 'https://cdn.brainpop.com/mx'
+    _TESTS = [{
+        'url': 'https://esp.brainpop.com/ciencia/la_diversidad_de_la_vida/ecosistemas/',
+        'md5': 'cb3f062db2b3c5240ddfcfde7108f8c9',
+        'info_dict': {
+            'id': '3893',
+            'ext': 'mp4',
+            'title': 'Ecosistemas',
+            'display_id': 'ecosistemas',
+            'description': 'md5:80fc55b07e241f8c8f2aa8d74deaf3c3',
+        },
+    }, {
+        'url': 'https://esp.brainpop.com/espanol/la_escritura/emily_dickinson/',
+        'md5': '98c1b9559e0e33777209c425cda7dac4',
+        'info_dict': {
+            'id': '7146',
+            'ext': 'mp4',
+            'title': 'Emily Dickinson',
+            'display_id': 'emily_dickinson',
+            'description': 'md5:2795ad87b1d239c9711c1e92ab5a978b',
+        },
+        'skip': 'Requires login',
+    }]
+
+
+class BrainPOPFrIE(BrainPOPLegacyBaseIE):
+    IE_DESC = 'BrainPOP Français'
+    _ORIGIN = 'https://fr.brainpop.com'
+    _VIDEO_URL = 'https://svideos.brainpop.com'
+    _HLS_URL = 'https://hls.brainpop.com'
+    _CDN_URL = 'https://cdn.brainpop.com/fr'
+    _TESTS = [{
+        'url': 'https://fr.brainpop.com/sciencesdelaterre/energie/sourcesdenergie/',
+        'md5': '97e7f48af8af93f8a2be11709f239371',
+        'info_dict': {
+            'id': '1651',
+            'ext': 'mp4',
+            'title': 'Sources d\'énergie',
+            'display_id': 'sourcesdenergie',
+            'description': 'md5:7eece350f019a21ef9f64d4088b2d857',
+        },
+    }, {
+        'url': 'https://fr.brainpop.com/francais/ecrire/plagiat/',
+        'md5': '0cf2b4f89804d0dd4a360a51310d445a',
+        'info_dict': {
+            'id': '5803',
+            'ext': 'mp4',
+            'title': 'Plagiat',
+            'display_id': 'plagiat',
+            'description': 'md5:4496d87127ace28e8b1eda116e77cd2b',
+        },
+        'skip': 'Requires login',
+    }]
+
+
+class BrainPOPIlIE(BrainPOPLegacyBaseIE):
+    IE_DESC = 'BrainPOP Hebrew'
+    _ORIGIN = 'https://il.brainpop.com'
+    _VIDEO_URL = 'https://svideos.brainpop.com'
+    _HLS_URL = 'https://hls.brainpop.com'
+    _CDN_URL = 'https://cdn.brainpop.com/he'
+    _TESTS = [{
+        'url': 'https://il.brainpop.com/category_9/subcategory_150/subjects_3782/',
+        'md5': '9e4ea9dc60ecd385a6e5ca12ccf31641',
+        'info_dict': {
+            'id': '3782',
+            'ext': 'mp4',
+            'title': 'md5:e993632fcda0545d9205602ec314ad67',
+            'display_id': 'subjects_3782',
+            'description': 'md5:4cc084a8012beb01f037724423a4d4ed',
+        },
+    }]
@@ -1,117 +1,189 @@
-import re
-
 from .adobepass import AdobePassIE
 from ..utils import (
-    smuggle_url,
-    update_url_query,
-    int_or_none,
+    HEADRequest,
+    extract_attributes,
     float_or_none,
-    try_get,
-    dict_get,
+    get_element_html_by_class,
+    int_or_none,
+    merge_dicts,
+    parse_age_limit,
+    remove_end,
+    str_or_none,
+    traverse_obj,
+    unescapeHTML,
+    unified_timestamp,
+    update_url_query,
+    url_or_none,
 )


 class BravoTVIE(AdobePassIE):
-    _VALID_URL = r'https?://(?:www\.)?(?P<req_id>bravotv|oxygen)\.com/(?:[^/]+/)+(?P<id>[^/?#]+)'
+    _VALID_URL = r'https?://(?:www\.)?(?P<site>bravotv|oxygen)\.com/(?:[^/]+/)+(?P<id>[^/?#]+)'
     _TESTS = [{
         'url': 'https://www.bravotv.com/top-chef/season-16/episode-15/videos/the-top-chef-season-16-winner-is',
-        'md5': 'e34684cfea2a96cd2ee1ef3a60909de9',
         'info_dict': {
-            'id': 'epL0pmK1kQlT',
+            'id': '3923059',
             'ext': 'mp4',
             'title': 'The Top Chef Season 16 Winner Is...',
             'description': 'Find out who takes the title of Top Chef!',
-            'uploader': 'NBCU-BRAV',
             'upload_date': '20190314',
             'timestamp': 1552591860,
             'season_number': 16,
             'episode_number': 15,
             'series': 'Top Chef',
             'episode': 'The Top Chef Season 16 Winner Is...',
-            'duration': 190.0,
-        }
+            'duration': 190.357,
+            'season': 'Season 16',
+            'thumbnail': r're:^https://.+\.jpg',
+        },
+        'params': {'skip_download': 'm3u8'},
     }, {
-        'url': 'http://www.bravotv.com/below-deck/season-3/ep-14-reunion-part-1',
-        'only_matching': True,
+        'url': 'https://www.bravotv.com/top-chef/season-20/episode-1/london-calling',
+        'info_dict': {
+            'id': '9000234570',
+            'ext': 'mp4',
+            'title': 'London Calling',
+            'description': 'md5:5af95a8cbac1856bd10e7562f86bb759',
+            'upload_date': '20230310',
+            'timestamp': 1678410000,
+            'season_number': 20,
+            'episode_number': 1,
+            'series': 'Top Chef',
+            'episode': 'London Calling',
+            'duration': 3266.03,
+            'season': 'Season 20',
+            'chapters': 'count:7',
+            'thumbnail': r're:^https://.+\.jpg',
+            'age_limit': 14,
+        },
+        'params': {'skip_download': 'm3u8'},
+        'skip': 'This video requires AdobePass MSO credentials',
+    }, {
+        'url': 'https://www.oxygen.com/in-ice-cold-blood/season-1/closing-night',
+        'info_dict': {
+            'id': '3692045',
+            'ext': 'mp4',
+            'title': 'Closing Night',
+            'description': 'md5:3170065c5c2f19548d72a4cbc254af63',
+            'upload_date': '20180401',
+            'timestamp': 1522623600,
+            'season_number': 1,
+            'episode_number': 1,
+            'series': 'In Ice Cold Blood',
+            'episode': 'Closing Night',
+            'duration': 2629.051,
+            'season': 'Season 1',
+            'chapters': 'count:6',
+            'thumbnail': r're:^https://.+\.jpg',
+            'age_limit': 14,
+        },
+        'params': {'skip_download': 'm3u8'},
+        'skip': 'This video requires AdobePass MSO credentials',
     }, {
         'url': 'https://www.oxygen.com/in-ice-cold-blood/season-2/episode-16/videos/handling-the-horwitz-house-after-the-murder-season-2',
+        'info_dict': {
+            'id': '3974019',
+            'ext': 'mp4',
+            'title': '\'Handling The Horwitz House After The Murder (Season 2, Episode 16)',
+            'description': 'md5:f9d638dd6946a1c1c0533a9c6100eae5',
+            'upload_date': '20190617',
+            'timestamp': 1560790800,
+            'season_number': 2,
+            'episode_number': 16,
+            'series': 'In Ice Cold Blood',
+            'episode': '\'Handling The Horwitz House After The Murder (Season 2, Episode 16)',
+            'duration': 68.235,
+            'season': 'Season 2',
+            'thumbnail': r're:^https://.+\.jpg',
+            'age_limit': 14,
+        },
+        'params': {'skip_download': 'm3u8'},
+    }, {
+        'url': 'https://www.bravotv.com/below-deck/season-3/ep-14-reunion-part-1',
         'only_matching': True,
     }]

     def _real_extract(self, url):
-        site, display_id = self._match_valid_url(url).groups()
+        site, display_id = self._match_valid_url(url).group('site', 'id')
         webpage = self._download_webpage(url, display_id)
-        settings = self._parse_json(self._search_regex(
-            r'<script[^>]+data-drupal-selector="drupal-settings-json"[^>]*>({.+?})</script>', webpage, 'drupal settings'),
-            display_id)
-        info = {}
+        settings = self._search_json(
+            r'<script[^>]+data-drupal-selector="drupal-settings-json"[^>]*>', webpage, 'settings', display_id)
+        tve = extract_attributes(get_element_html_by_class('tve-video-deck-app', webpage) or '')
         query = {
-            'mbr': 'true',
+            'manifest': 'm3u',
+            'formats': 'm3u,mpeg4',
         }
-        account_pid, release_pid = [None] * 2
-        tve = settings.get('ls_tve')
+
         if tve:
-            query['manifest'] = 'm3u'
-            mobj = re.search(r'<[^>]+id="pdk-player"[^>]+data-url=["\']?(?:https?:)?//player\.theplatform\.com/p/([^/]+)/(?:[^/]+/)*select/([^?#&"\']+)', webpage)
-            if mobj:
-                account_pid, tp_path = mobj.groups()
-                release_pid = tp_path.strip('/').split('/')[-1]
-            else:
-                account_pid = 'HNK2IC'
-                tp_path = release_pid = tve['release_pid']
-            if tve.get('entitlement') == 'auth':
-                adobe_pass = settings.get('tve_adobe_auth', {})
-                if site == 'bravotv':
-                    site = 'bravo'
+            account_pid = tve.get('data-mpx-media-account-pid') or 'HNK2IC'
+            account_id = tve['data-mpx-media-account-id']
+            metadata = self._parse_json(
+                tve.get('data-normalized-video', ''), display_id, fatal=False, transform_source=unescapeHTML)
+            video_id = tve.get('data-guid') or metadata['guid']
+            if tve.get('data-entitlement') == 'auth':
+                auth = traverse_obj(settings, ('tve_adobe_auth', {dict})) or {}
+                site = remove_end(site, 'tv')
+                release_pid = tve['data-release-pid']
                 resource = self._get_mvpd_resource(
-                    adobe_pass.get('adobePassResourceId') or site,
-                    tve['title'], release_pid, tve.get('rating'))
-                query['auth'] = self._extract_mvpd_auth(
-                    url, release_pid,
-                    adobe_pass.get('adobePassRequestorId') or site, resource)
+                    tve.get('data-adobe-pass-resource-id') or auth.get('adobePassResourceId') or site,
+                    tve['data-title'], release_pid, tve.get('data-rating'))
+                query.update({
+                    'switch': 'HLSServiceSecure',
+                    'auth': self._extract_mvpd_auth(
+                        url, release_pid, auth.get('adobePassRequestorId') or site, resource),
+                })
+
         else:
-            shared_playlist = settings['ls_playlist']
-            account_pid = shared_playlist['account_pid']
-            metadata = shared_playlist['video_metadata'][shared_playlist['default_clip']]
-            tp_path = release_pid = metadata.get('release_pid')
-            if not release_pid:
-                release_pid = metadata['guid']
-                tp_path = 'media/guid/2140479951/' + release_pid
-            info.update({
-                'title': metadata['title'],
-                'description': metadata.get('description'),
-                'season_number': int_or_none(metadata.get('season_num')),
-                'episode_number': int_or_none(metadata.get('episode_num')),
-            })
-            query['switch'] = 'progressive'
-
-        tp_url = 'http://link.theplatform.com/s/%s/%s' % (account_pid, tp_path)
+            ls_playlist = traverse_obj(settings, ('ls_playlist', ..., {dict}), get_all=False) or {}
+            account_pid = ls_playlist.get('mpxMediaAccountPid') or 'PHSl-B'
+            account_id = ls_playlist['mpxMediaAccountId']
+            video_id = ls_playlist['defaultGuid']
+            metadata = traverse_obj(
+                ls_playlist, ('videos', lambda _, v: v['guid'] == video_id, {dict}), get_all=False)

+        tp_url = f'https://link.theplatform.com/s/{account_pid}/media/guid/{account_id}/{video_id}'
         tp_metadata = self._download_json(
-            update_url_query(tp_url, {'format': 'preview'}),
-            display_id, fatal=False)
-        if tp_metadata:
-            info.update({
-                'title': tp_metadata.get('title'),
-                'description': tp_metadata.get('description'),
-                'duration': float_or_none(tp_metadata.get('duration'), 1000),
-                'season_number': int_or_none(
-                    dict_get(tp_metadata, ('pl1$seasonNumber', 'nbcu$seasonNumber'))),
-                'episode_number': int_or_none(
-                    dict_get(tp_metadata, ('pl1$episodeNumber', 'nbcu$episodeNumber'))),
-                # For some reason the series is sometimes wrapped into a single element array.
-                'series': try_get(
-                    dict_get(tp_metadata, ('pl1$show', 'nbcu$show')),
-                    lambda x: x[0] if isinstance(x, list) else x,
-                    expected_type=str),
-                'episode': dict_get(
-                    tp_metadata, ('pl1$episodeName', 'nbcu$episodeName', 'title')),
-            })
-
-        info.update({
-            '_type': 'url_transparent',
-            'id': release_pid,
-            'url': smuggle_url(update_url_query(tp_url, query), {'force_smil_url': True}),
-            'ie_key': 'ThePlatform',
-        })
-        return info
+            update_url_query(tp_url, {'format': 'preview'}), video_id, fatal=False)
+
+        seconds_or_none = lambda x: float_or_none(x, 1000)
+        chapters = traverse_obj(tp_metadata, ('chapters', ..., {
+            'start_time': ('startTime', {seconds_or_none}),
+            'end_time': ('endTime', {seconds_or_none}),
+        }))
+        # prune pointless single chapters that span the entire duration from short videos
+        if len(chapters) == 1 and not traverse_obj(chapters, (0, 'end_time')):
+            chapters = None
+
+        m3u8_url = self._request_webpage(HEADRequest(
+            update_url_query(f'{tp_url}/stream.m3u8', query)), video_id, 'Checking m3u8 URL').geturl()
+        if 'mpeg_cenc' in m3u8_url:
+            self.report_drm(video_id)
+        formats, subtitles = self._extract_m3u8_formats_and_subtitles(m3u8_url, video_id, 'mp4', m3u8_id='hls')
+
+        return {
+            'id': video_id,
+            'formats': formats,
+            'subtitles': subtitles,
+            'chapters': chapters,
+            **merge_dicts(traverse_obj(tp_metadata, {
+                'title': 'title',
+                'description': 'description',
+                'duration': ('duration', {seconds_or_none}),
+                'timestamp': ('pubDate', {seconds_or_none}),
+                'season_number': (('pl1$seasonNumber', 'nbcu$seasonNumber'), {int_or_none}),
+                'episode_number': (('pl1$episodeNumber', 'nbcu$episodeNumber'), {int_or_none}),
+                'series': (('pl1$show', 'nbcu$show'), (None, ...), {str}),
+                'episode': (('title', 'pl1$episodeNumber', 'nbcu$episodeNumber'), {str_or_none}),
+                'age_limit': ('ratings', ..., 'rating', {parse_age_limit}),
+            }, get_all=False), traverse_obj(metadata, {
+                'title': 'title',
+                'description': 'description',
+                'duration': ('durationInSeconds', {int_or_none}),
+                'timestamp': ('airDate', {unified_timestamp}),
+                'thumbnail': ('thumbnailUrl', {url_or_none}),
+                'season_number': ('seasonNumber', {int_or_none}),
+                'episode_number': ('episodeNumber', {int_or_none}),
+                'episode': 'episodeTitle',
+                'series': 'show',
+            }))
+        }
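The rewritten extractor maps optional theplatform fields through `traverse_obj` templates, converting millisecond values to seconds. A plain-Python sketch of that mapping (field names from the code above; the helper is illustrative):

    # Pull optional keys from theplatform metadata, converting ms to seconds.
    def map_tp_metadata(tp):
        sec = lambda v: v / 1000 if isinstance(v, (int, float)) else None
        return {k: v for k, v in {
            'title': tp.get('title'),
            'description': tp.get('description'),
            'duration': sec(tp.get('duration')),
            'timestamp': sec(tp.get('pubDate')),
        }.items() if v is not None}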
@@ -575,6 +575,7 @@ def build_format_id(kind):
             self.raise_no_formats(
                 error.get('message') or error.get('error_subcode') or error['error_code'], expected=True)

+        headers.pop('Authorization', None)  # or else http formats will give error 400
         for f in formats:
             f.setdefault('http_headers', {}).update(headers)

@@ -895,8 +896,9 @@ def extract_policy_key():
                 store_pk(policy_key)
                 return policy_key

-        api_url = 'https://edge.api.brightcove.com/playback/v1/accounts/%s/%ss/%s' % (account_id, content_type, video_id)
-        headers = {}
+        token = smuggled_data.get('token')
+        api_url = f'https://{"edge-auth" if token else "edge"}.api.brightcove.com/playback/v1/accounts/{account_id}/{content_type}s/{video_id}'
+        headers = {'Authorization': f'Bearer {token}'} if token else {}
         referrer = smuggled_data.get('referrer')  # XXX: notice the spelling/case of the key
         if referrer:
            headers.update({
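The second hunk switches to the `edge-auth` playback host and attaches a Bearer header whenever a token is smuggled in. A sketch of that request construction:

    # Token-aware Brightcove playback request building, as above.
    def playback_request(account_id, content_type, video_id, token=None):
        host = 'edge-auth' if token else 'edge'
        url = f'https://{host}.api.brightcove.com/playback/v1/accounts/{account_id}/{content_type}s/{video_id}'
        headers = {'Authorization': f'Bearer {token}'} if token else {}
        return url, headers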
@@ -1,9 +1,5 @@
 from .common import InfoExtractor
-from ..utils import (
-    traverse_obj,
-    float_or_none,
-    int_or_none
-)
+from ..utils import float_or_none, int_or_none, make_archive_id, traverse_obj


 class CallinIE(InfoExtractor):
@@ -35,6 +31,54 @@ class CallinIE(InfoExtractor):
             'episode_number': 1,
             'episode_id': '218b979630a35ead12c6fd096f2996c56c37e4d0dc1f6dc0feada32dcf7b31cd'
         }
+    }, {
+        'url': 'https://www.callin.com/episode/fcc-commissioner-brendan-carr-on-elons-PrumRdSQJW',
+        'md5': '14ede27ee2c957b7e4db93140fc0745c',
+        'info_dict': {
+            'id': 'c3dab47f237bf953d180d3f243477a84302798be0e0b29bc9ade6d60a69f04f5',
+            'ext': 'ts',
+            'title': 'FCC Commissioner Brendan Carr on Elon’s Starlink',
+            'description': 'Or, why the government doesn’t like SpaceX',
+            'channel': 'The Pull Request',
+            'channel_url': 'https://callin.com/show/the-pull-request-ucnDJmEKAa',
+            'duration': 3182.472,
+            'series_id': '7e9c23156e4aecfdcaef46bfb2ed7ca268509622ec006c0f0f25d90e34496638',
+            'uploader_url': 'http://thepullrequest.com',
+            'upload_date': '20220902',
+            'episode': 'FCC Commissioner Brendan Carr on Elon’s Starlink',
+            'display_id': 'fcc-commissioner-brendan-carr-on-elons-PrumRdSQJW',
+            'series': 'The Pull Request',
+            'channel_id': '7e9c23156e4aecfdcaef46bfb2ed7ca268509622ec006c0f0f25d90e34496638',
+            'view_count': int,
+            'uploader': 'Antonio García Martínez',
+            'thumbnail': 'https://d1z76fhpoqkd01.cloudfront.net/shows/legacy/1ade9142625344045dc17cf523469ced1d93610762f4c886d06aa190a2f979e8.png',
+            'episode_id': 'c3dab47f237bf953d180d3f243477a84302798be0e0b29bc9ade6d60a69f04f5',
+            'timestamp': 1662100688.005,
+        }
+    }, {
+        'url': 'https://www.callin.com/episode/episode-81-elites-melt-down-over-student-debt-lzxMidUnjA',
+        'md5': '16f704ddbf82a27e3930533b12062f07',
+        'info_dict': {
+            'id': '8d06f869798f93a7814e380bceabea72d501417e620180416ff6bd510596e83c',
+            'ext': 'ts',
+            'title': 'Episode 81- Elites MELT DOWN over Student Debt Victory? Rumble in NYC?',
+            'description': 'Let’s talk todays episode about the primary election shake up in NYC and the elites melting down over student debt cancelation.',
+            'channel': 'The DEBRIEF With Briahna Joy Gray',
+            'channel_url': 'https://callin.com/show/the-debrief-with-briahna-joy-gray-siiFDzGegm',
+            'duration': 10043.16,
+            'series_id': '61cea58444465fd26674069703bd8322993bc9e5b4f1a6d0872690554a046ff7',
+            'uploader_url': 'http://patreon.com/badfaithpodcast',
+            'upload_date': '20220826',
+            'episode': 'Episode 81- Elites MELT DOWN over Student Debt Victory? Rumble in NYC?',
+            'display_id': 'episode-',
+            'series': 'The DEBRIEF With Briahna Joy Gray',
+            'channel_id': '61cea58444465fd26674069703bd8322993bc9e5b4f1a6d0872690554a046ff7',
+            'view_count': int,
+            'uploader': 'Briahna Gray',
+            'thumbnail': 'https://d1z76fhpoqkd01.cloudfront.net/shows/legacy/461ea0d86172cb6aff7d6c80fd49259cf5e64bdf737a4650f8bc24cf392ca218.png',
+            'episode_id': '8d06f869798f93a7814e380bceabea72d501417e620180416ff6bd510596e83c',
+            'timestamp': 1661476708.282,
+        }
     }]

     def try_get_user_name(self, d):
@@ -86,6 +130,7 @@ def _real_extract(self, url):

         return {
             'id': id,
+            '_old_archive_ids': [make_archive_id(self, display_id.rsplit('-', 1)[-1])],
             'display_id': display_id,
             'title': title,
             'formats': formats,
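The new `_old_archive_ids` entry keeps download archives valid across the ID scheme change: the legacy ID was simply the trailing token of the URL slug. A sketch:

    # The old id was the tail of the display id ('...-PrumRdSQJW' -> 'PrumRdSQJW').
    def legacy_id(display_id):
        return display_id.rsplit('-', 1)[-1]

    assert legacy_id('fcc-commissioner-brendan-carr-on-elons-PrumRdSQJW') == 'PrumRdSQJW'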
85
yt_dlp/extractor/camfm.py
Normal file
85
yt_dlp/extractor/camfm.py
Normal file
@ -0,0 +1,85 @@
|
|||||||
|
import re
|
||||||
|
|
||||||
|
from .common import InfoExtractor
|
||||||
|
from ..utils import (
|
||||||
|
clean_html,
|
||||||
|
get_element_by_class,
|
||||||
|
get_elements_by_class,
|
||||||
|
join_nonempty,
|
||||||
|
traverse_obj,
|
||||||
|
unified_timestamp,
|
||||||
|
urljoin,
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
class CamFMShowIE(InfoExtractor):
|
||||||
|
_VALID_URL = r'https://(?:www\.)?camfm\.co\.uk/shows/(?P<id>[^/]+)'
|
||||||
|
_TESTS = [{
|
||||||
|
'playlist_mincount': 5,
|
||||||
|
'url': 'https://camfm.co.uk/shows/soul-mining/',
|
||||||
|
'info_dict': {
|
||||||
|
'id': 'soul-mining',
|
||||||
|
'thumbnail': 'md5:6a873091f92c936f23bdcce80f75e66a',
|
||||||
|
'title': 'Soul Mining',
|
||||||
|
'description': 'Telling the stories of jazz, funk and soul from all corners of the world.',
|
||||||
|
},
|
||||||
|
}]
|
||||||
|
|
||||||
|
def _real_extract(self, url):
|
||||||
|
show_id = self._match_id(url)
|
||||||
|
page = self._download_webpage(url, show_id)
|
||||||
|
|
||||||
|
return {
|
||||||
|
'_type': 'playlist',
|
||||||
|
'id': show_id,
|
||||||
|
'entries': [self.url_result(urljoin('https://camfm.co.uk', i), CamFMEpisodeIE)
|
||||||
|
for i in re.findall(r"javascript:popup\('(/player/[^']+)', 'listen'", page)],
|
||||||
|
'thumbnail': urljoin('https://camfm.co.uk', self._search_regex(
|
||||||
|
r'<img[^>]+class="thumb-expand"[^>]+src="([^"]+)"', page, 'thumbnail', fatal=False)),
|
||||||
|
'title': self._html_search_regex('<h1>([^<]+)</h1>', page, 'title', fatal=False),
|
||||||
|
'description': clean_html(get_element_by_class('small-12 medium-8 cell', page))
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
class CamFMEpisodeIE(InfoExtractor):
|
||||||
|
_VALID_URL = r'https://(?:www\.)?camfm\.co\.uk/player/(?P<id>[^/]+)'
|
||||||
|
_TESTS = [{
|
||||||
|
'url': 'https://camfm.co.uk/player/43336',
|
||||||
|
'skip': 'Episode will expire - don\'t actually know when, but it will go eventually',
|
||||||
|
'info_dict': {
|
||||||
|
'id': '43336',
|
||||||
|
'title': 'AITAA: Am I the Agony Aunt? - 19:00 Tue 16/05/2023',
|
||||||
|
'ext': 'mp3',
|
||||||
|
'upload_date': '20230516',
|
||||||
|
'description': 'md5:f165144f94927c0f1bfa2ee6e6ab7bbf',
|
||||||
|
'timestamp': 1684263600,
|
||||||
|
'series': 'AITAA: Am I the Agony Aunt?',
|
||||||
|
'thumbnail': 'md5:5980a831360d0744c3764551be3d09c1',
|
||||||
|
'categories': ['Entertainment'],
|
||||||
|
}
|
||||||
|
}]
|
||||||
|
|
||||||
|
def _real_extract(self, url):
|
||||||
|
episode_id = self._match_id(url)
|
||||||
|
page = self._download_webpage(url, episode_id)
|
||||||
|
audios = self._parse_html5_media_entries('https://audio.camfm.co.uk', page, episode_id)
|
||||||
|
|
||||||
|
caption = get_element_by_class('caption', page)
|
||||||
|
series = clean_html(re.sub(r'<span[^<]+<[^<]+>', '', caption))
|
||||||
|
|
||||||
|
card_section = get_element_by_class('card-section', page)
|
||||||
|
date = self._html_search_regex('>Aired at ([^<]+)<', card_section, 'air date', fatal=False)
|
||||||
|
|
||||||
|
return {
|
||||||
|
'id': episode_id,
|
||||||
|
'title': join_nonempty(series, date, delim=' - '),
|
||||||
|
'formats': traverse_obj(audios, (..., 'formats', ...)),
|
||||||
|
'timestamp': unified_timestamp(date), # XXX: Does not account for UK's daylight savings
|
||||||
|
'series': series,
|
||||||
|
'description': clean_html(re.sub(r'<b>[^<]+</b><br[^>]+/>', '', card_section)),
|
||||||
|
'thumbnail': urljoin('https://camfm.co.uk', self._search_regex(
|
||||||
|
r'<div[^>]+class="cover-art"[^>]+style="[^"]+url\(\'([^\']+)',
|
||||||
|
page, 'thumbnail', fatal=False)),
|
||||||
|
'categories': get_elements_by_class('label', caption),
|
||||||
|
'was_live': True,
|
||||||
|
}
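In CamFMEpisodeIE, both the title and the timestamp derive from the scraped 'Aired at …' string. A small sketch of that derivation using the same helpers; the sample strings are taken from the test expectations above, everything else is assumed:

from yt_dlp.utils import join_nonempty, unified_timestamp

series = 'AITAA: Am I the Agony Aunt?'
date = '19:00 Tue 16/05/2023'  # as scraped from '>Aired at ...<'

print(join_nonempty(series, date, delim=' - '))
# -> 'AITAA: Am I the Agony Aunt? - 19:00 Tue 16/05/2023'
print(unified_timestamp(date))
# -> 1684263600 per the test above; parsed as UTC, hence the DST caveat in the code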
@@ -1,9 +1,5 @@
 from .common import InfoExtractor
-from ..utils import (
-    ExtractorError,
-    int_or_none,
-    url_or_none,
-)
+from ..utils import int_or_none, url_or_none


 class CamModelsIE(InfoExtractor):
@@ -17,32 +13,11 @@ class CamModelsIE(InfoExtractor):
     def _real_extract(self, url):
         user_id = self._match_id(url)

-        webpage = self._download_webpage(
-            url, user_id, headers=self.geo_verification_headers())
-
-        manifest_root = self._html_search_regex(
-            r'manifestUrlRoot=([^&\']+)', webpage, 'manifest', default=None)
-
-        if not manifest_root:
-            ERRORS = (
-                ("I'm offline, but let's stay connected", 'This user is currently offline'),
-                ('in a private show', 'This user is in a private show'),
-                ('is currently performing LIVE', 'This model is currently performing live'),
-            )
-            for pattern, message in ERRORS:
-                if pattern in webpage:
-                    error = message
-                    expected = True
-                    break
-            else:
-                error = 'Unable to find manifest URL root'
-                expected = False
-            raise ExtractorError(error, expected=expected)
-
         manifest = self._download_json(
-            '%s%s.json' % (manifest_root, user_id), user_id)
+            'https://manifest-server.naiadsystems.com/live/s:%s.json' % user_id, user_id)

         formats = []
+        thumbnails = []
         for format_id, format_dict in manifest['formats'].items():
             if not isinstance(format_dict, dict):
                 continue
@@ -82,12 +57,20 @@ def _real_extract(self, url):
                         'quality': -10,
                     })
                 else:
+                    if format_id == 'jpeg':
+                        thumbnails.append({
+                            'url': f['url'],
+                            'width': f['width'],
+                            'height': f['height'],
+                            'format_id': f['format_id'],
+                        })
                     continue
                 formats.append(f)

         return {
             'id': user_id,
             'title': user_id,
+            'thumbnails': thumbnails,
             'is_live': True,
             'formats': formats,
             'age_limit': 18
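The naiadsystems manifest lists a 'jpeg' pseudo-format next to the real streams; rather than skipping those entries, the change above recycles them as thumbnails. A toy illustration of the split, with an invented manifest excerpt that only carries the keys this loop reads:

manifest = {  # invented sample; real manifests carry many more fields
    'formats': {
        'mp4-hls': {'encodings': [{'location': 'https://example.invalid/stream.m3u8',
                                   'videoWidth': 1280, 'videoHeight': 720}]},
        'jpeg': {'encodings': [{'location': 'https://example.invalid/preview.jpg',
                                'videoWidth': 1280, 'videoHeight': 720}]},
    },
}

formats, thumbnails = [], []
for format_id, format_dict in manifest['formats'].items():
    for media in format_dict['encodings']:
        entry = {'url': media['location'], 'width': media['videoWidth'],
                 'height': media['videoHeight']}
        (thumbnails if format_id == 'jpeg' else formats).append(entry)

print(len(formats), len(thumbnails))  # -> 1 1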
@@ -1,383 +0,0 @@
-import json
-
-from .common import InfoExtractor
-from .gigya import GigyaBaseIE
-from ..compat import compat_HTTPError
-from ..utils import (
-    ExtractorError,
-    clean_html,
-    extract_attributes,
-    float_or_none,
-    get_element_by_class,
-    int_or_none,
-    merge_dicts,
-    str_or_none,
-    strip_or_none,
-    url_or_none,
-    urlencode_postdata
-)
-
-
-class CanvasIE(InfoExtractor):
-    _VALID_URL = r'https?://mediazone\.vrt\.be/api/v1/(?P<site_id>canvas|een|ketnet|vrt(?:video|nieuws)|sporza|dako)/assets/(?P<id>[^/?#&]+)'
-    _TESTS = [{
-        'url': 'https://mediazone.vrt.be/api/v1/ketnet/assets/md-ast-4ac54990-ce66-4d00-a8ca-9eac86f4c475',
-        'md5': '37b2b7bb9b3dcaa05b67058dc3a714a9',
-        'info_dict': {
-            'id': 'md-ast-4ac54990-ce66-4d00-a8ca-9eac86f4c475',
-            'display_id': 'md-ast-4ac54990-ce66-4d00-a8ca-9eac86f4c475',
-            'ext': 'mp4',
-            'title': 'Nachtwacht: De Greystook',
-            'description': 'Nachtwacht: De Greystook',
-            'thumbnail': r're:^https?://.*\.jpg$',
-            'duration': 1468.02,
-        },
-        'expected_warnings': ['is not a supported codec'],
-    }, {
-        'url': 'https://mediazone.vrt.be/api/v1/canvas/assets/mz-ast-5e5f90b6-2d72-4c40-82c2-e134f884e93e',
-        'only_matching': True,
-    }]
-    _GEO_BYPASS = False
-    _HLS_ENTRY_PROTOCOLS_MAP = {
-        'HLS': 'm3u8_native',
-        'HLS_AES': 'm3u8_native',
-    }
-    _REST_API_BASE = 'https://media-services-public.vrt.be/vualto-video-aggregator-web/rest/external/v2'
-
-    def _real_extract(self, url):
-        mobj = self._match_valid_url(url)
-        site_id, video_id = mobj.group('site_id'), mobj.group('id')
-
-        data = None
-        if site_id != 'vrtvideo':
-            # Old API endpoint, serves more formats but may fail for some videos
-            data = self._download_json(
-                'https://mediazone.vrt.be/api/v1/%s/assets/%s'
-                % (site_id, video_id), video_id, 'Downloading asset JSON',
-                'Unable to download asset JSON', fatal=False)
-
-        # New API endpoint
-        if not data:
-            vrtnutoken = self._download_json('https://token.vrt.be/refreshtoken',
-                                             video_id, note='refreshtoken: Retrieve vrtnutoken',
-                                             errnote='refreshtoken failed')['vrtnutoken']
-            headers = self.geo_verification_headers()
-            headers.update({'Content-Type': 'application/json; charset=utf-8'})
-            vrtPlayerToken = self._download_json(
-                '%s/tokens' % self._REST_API_BASE, video_id,
-                'Downloading token', headers=headers, data=json.dumps({
-                    'identityToken': vrtnutoken
-                }).encode('utf-8'))['vrtPlayerToken']
-            data = self._download_json(
-                '%s/videos/%s' % (self._REST_API_BASE, video_id),
-                video_id, 'Downloading video JSON', query={
-                    'vrtPlayerToken': vrtPlayerToken,
-                    'client': 'null',
-                }, expected_status=400)
-            if 'title' not in data:
-                code = data.get('code')
-                if code == 'AUTHENTICATION_REQUIRED':
-                    self.raise_login_required()
-                elif code == 'INVALID_LOCATION':
-                    self.raise_geo_restricted(countries=['BE'])
-                raise ExtractorError(data.get('message') or code, expected=True)
-
-        # Note: The title may be an empty string
-        title = data['title'] or f'{site_id} {video_id}'
-        description = data.get('description')
-
-        formats = []
-        subtitles = {}
-        for target in data['targetUrls']:
-            format_url, format_type = url_or_none(target.get('url')), str_or_none(target.get('type'))
-            if not format_url or not format_type:
-                continue
-            format_type = format_type.upper()
-            if format_type in self._HLS_ENTRY_PROTOCOLS_MAP:
-                fmts, subs = self._extract_m3u8_formats_and_subtitles(
-                    format_url, video_id, 'mp4', self._HLS_ENTRY_PROTOCOLS_MAP[format_type],
-                    m3u8_id=format_type, fatal=False)
-                formats.extend(fmts)
-                subtitles = self._merge_subtitles(subtitles, subs)
-            elif format_type == 'HDS':
-                formats.extend(self._extract_f4m_formats(
-                    format_url, video_id, f4m_id=format_type, fatal=False))
-            elif format_type == 'MPEG_DASH':
-                fmts, subs = self._extract_mpd_formats_and_subtitles(
-                    format_url, video_id, mpd_id=format_type, fatal=False)
-                formats.extend(fmts)
-                subtitles = self._merge_subtitles(subtitles, subs)
-            elif format_type == 'HSS':
-                fmts, subs = self._extract_ism_formats_and_subtitles(
-                    format_url, video_id, ism_id='mss', fatal=False)
-                formats.extend(fmts)
-                subtitles = self._merge_subtitles(subtitles, subs)
-            else:
-                formats.append({
-                    'format_id': format_type,
-                    'url': format_url,
-                })
-
-        subtitle_urls = data.get('subtitleUrls')
-        if isinstance(subtitle_urls, list):
-            for subtitle in subtitle_urls:
-                subtitle_url = subtitle.get('url')
-                if subtitle_url and subtitle.get('type') == 'CLOSED':
-                    subtitles.setdefault('nl', []).append({'url': subtitle_url})
-
-        return {
-            'id': video_id,
-            'display_id': video_id,
-            'title': title,
-            'description': description,
-            'formats': formats,
-            'duration': float_or_none(data.get('duration'), 1000),
-            'thumbnail': data.get('posterImageUrl'),
-            'subtitles': subtitles,
-        }
-
-
-class CanvasEenIE(InfoExtractor):
-    IE_DESC = 'canvas.be and een.be'
-    _VALID_URL = r'https?://(?:www\.)?(?P<site_id>canvas|een)\.be/(?:[^/]+/)*(?P<id>[^/?#&]+)'
-    _TESTS = [{
-        'url': 'http://www.canvas.be/video/de-afspraak/najaar-2015/de-afspraak-veilt-voor-de-warmste-week',
-        'md5': 'ed66976748d12350b118455979cca293',
-        'info_dict': {
-            'id': 'mz-ast-5e5f90b6-2d72-4c40-82c2-e134f884e93e',
-            'display_id': 'de-afspraak-veilt-voor-de-warmste-week',
-            'ext': 'flv',
-            'title': 'De afspraak veilt voor de Warmste Week',
-            'description': 'md5:24cb860c320dc2be7358e0e5aa317ba6',
-            'thumbnail': r're:^https?://.*\.jpg$',
-            'duration': 49.02,
-        },
-        'expected_warnings': ['is not a supported codec'],
-    }, {
-        # with subtitles
-        'url': 'http://www.canvas.be/video/panorama/2016/pieter-0167',
-        'info_dict': {
-            'id': 'mz-ast-5240ff21-2d30-4101-bba6-92b5ec67c625',
-            'display_id': 'pieter-0167',
-            'ext': 'mp4',
-            'title': 'Pieter 0167',
-            'description': 'md5:943cd30f48a5d29ba02c3a104dc4ec4e',
-            'thumbnail': r're:^https?://.*\.jpg$',
-            'duration': 2553.08,
-            'subtitles': {
-                'nl': [{
-                    'ext': 'vtt',
-                }],
-            },
-        },
-        'params': {
-            'skip_download': True,
-        },
-        'skip': 'Pagina niet gevonden',
-    }, {
-        'url': 'https://www.een.be/thuis/emma-pakt-thilly-aan',
-        'info_dict': {
-            'id': 'md-ast-3a24ced2-64d7-44fb-b4ed-ed1aafbf90b8',
-            'display_id': 'emma-pakt-thilly-aan',
-            'ext': 'mp4',
-            'title': 'Emma pakt Thilly aan',
-            'description': 'md5:c5c9b572388a99b2690030afa3f3bad7',
-            'thumbnail': r're:^https?://.*\.jpg$',
-            'duration': 118.24,
-        },
-        'params': {
-            'skip_download': True,
-        },
-        'expected_warnings': ['is not a supported codec'],
-    }, {
-        'url': 'https://www.canvas.be/check-point/najaar-2016/de-politie-uw-vriend',
-        'only_matching': True,
-    }]
-
-    def _real_extract(self, url):
-        mobj = self._match_valid_url(url)
-        site_id, display_id = mobj.group('site_id'), mobj.group('id')
-
-        webpage = self._download_webpage(url, display_id)
-
-        title = strip_or_none(self._search_regex(
-            r'<h1[^>]+class="video__body__header__title"[^>]*>(.+?)</h1>',
-            webpage, 'title', default=None) or self._og_search_title(
-            webpage, default=None))
-
-        video_id = self._html_search_regex(
-            r'data-video=(["\'])(?P<id>(?:(?!\1).)+)\1', webpage, 'video id',
-            group='id')
-
-        return {
-            '_type': 'url_transparent',
-            'url': 'https://mediazone.vrt.be/api/v1/%s/assets/%s' % (site_id, video_id),
-            'ie_key': CanvasIE.ie_key(),
-            'id': video_id,
-            'display_id': display_id,
-            'title': title,
-            'description': self._og_search_description(webpage),
-        }
-
-
-class VrtNUIE(GigyaBaseIE):
-    IE_DESC = 'VrtNU.be'
-    _VALID_URL = r'https?://(?:www\.)?vrt\.be/vrtnu/a-z/(?:[^/]+/){2}(?P<id>[^/?#&]+)'
-    _TESTS = [{
-        # Available via old API endpoint
-        'url': 'https://www.vrt.be/vrtnu/a-z/postbus-x/1989/postbus-x-s1989a1/',
-        'info_dict': {
-            'id': 'pbs-pub-e8713dac-899e-41de-9313-81269f4c04ac$vid-90c932b1-e21d-4fb8-99b1-db7b49cf74de',
-            'ext': 'mp4',
-            'title': 'Postbus X - Aflevering 1 (Seizoen 1989)',
-            'description': 'md5:b704f669eb9262da4c55b33d7c6ed4b7',
-            'duration': 1457.04,
-            'thumbnail': r're:^https?://.*\.jpg$',
-            'series': 'Postbus X',
-            'season': 'Seizoen 1989',
-            'season_number': 1989,
-            'episode': 'De zwarte weduwe',
-            'episode_number': 1,
-            'timestamp': 1595822400,
-            'upload_date': '20200727',
-        },
-        'skip': 'This video is only available for registered users',
-        'expected_warnings': ['is not a supported codec'],
-    }, {
-        # Only available via new API endpoint
-        'url': 'https://www.vrt.be/vrtnu/a-z/kamp-waes/1/kamp-waes-s1a5/',
-        'info_dict': {
-            'id': 'pbs-pub-0763b56c-64fb-4d38-b95b-af60bf433c71$vid-ad36a73c-4735-4f1f-b2c0-a38e6e6aa7e1',
-            'ext': 'mp4',
-            'title': 'Aflevering 5',
-            'description': 'Wie valt door de mand tijdens een missie?',
-            'duration': 2967.06,
-            'season': 'Season 1',
-            'season_number': 1,
-            'episode_number': 5,
-        },
-        'skip': 'This video is only available for registered users',
-        'expected_warnings': ['Unable to download asset JSON', 'is not a supported codec', 'Unknown MIME type'],
-    }]
-    _NETRC_MACHINE = 'vrtnu'
-    _APIKEY = '3_0Z2HujMtiWq_pkAjgnS2Md2E11a1AwZjYiBETtwNE-EoEHDINgtnvcAOpNgmrVGy'
-    _CONTEXT_ID = 'R3595707040'
-
-    def _perform_login(self, username, password):
-        auth_info = self._gigya_login({
-            'APIKey': self._APIKEY,
-            'targetEnv': 'jssdk',
-            'loginID': username,
-            'password': password,
-            'authMode': 'cookie',
-        })
-
-        if auth_info.get('errorDetails'):
-            raise ExtractorError('Unable to login: VrtNU said: ' + auth_info.get('errorDetails'), expected=True)
-
-        # Sometimes authentication fails for no good reason, retry
-        login_attempt = 1
-        while login_attempt <= 3:
-            try:
-                self._request_webpage('https://token.vrt.be/vrtnuinitlogin',
-                                      None, note='Requesting XSRF Token', errnote='Could not get XSRF Token',
-                                      query={'provider': 'site', 'destination': 'https://www.vrt.be/vrtnu/'})
-
-                post_data = {
-                    'UID': auth_info['UID'],
-                    'UIDSignature': auth_info['UIDSignature'],
-                    'signatureTimestamp': auth_info['signatureTimestamp'],
-                    '_csrf': self._get_cookies('https://login.vrt.be').get('OIDCXSRF').value,
-                }
-
-                self._request_webpage(
-                    'https://login.vrt.be/perform_login',
-                    None, note='Performing login', errnote='perform login failed',
-                    headers={}, query={
-                        'client_id': 'vrtnu-site'
-                    }, data=urlencode_postdata(post_data))
-
-            except ExtractorError as e:
-                if isinstance(e.cause, compat_HTTPError) and e.cause.code == 401:
-                    login_attempt += 1
-                    self.report_warning('Authentication failed')
-                    self._sleep(1, None, msg_template='Waiting for %(timeout)s seconds before trying again')
-                else:
-                    raise e
-            else:
-                break
-
-    def _real_extract(self, url):
-        display_id = self._match_id(url)
-
-        webpage = self._download_webpage(url, display_id)
-
-        attrs = extract_attributes(self._search_regex(
-            r'(<nui-media[^>]+>)', webpage, 'media element'))
-        video_id = attrs['videoid']
-        publication_id = attrs.get('publicationid')
-        if publication_id:
-            video_id = publication_id + '$' + video_id
-
-        page = (self._parse_json(self._search_regex(
-            r'digitalData\s*=\s*({.+?});', webpage, 'digial data',
-            default='{}'), video_id, fatal=False) or {}).get('page') or {}
-
-        info = self._search_json_ld(webpage, display_id, default={})
-        return merge_dicts(info, {
-            '_type': 'url_transparent',
-            'url': 'https://mediazone.vrt.be/api/v1/vrtvideo/assets/%s' % video_id,
-            'ie_key': CanvasIE.ie_key(),
-            'id': video_id,
-            'display_id': display_id,
-            'season_number': int_or_none(page.get('episode_season')),
-        })
-
-
-class DagelijkseKostIE(InfoExtractor):
-    IE_DESC = 'dagelijksekost.een.be'
-    _VALID_URL = r'https?://dagelijksekost\.een\.be/gerechten/(?P<id>[^/?#&]+)'
-    _TEST = {
-        'url': 'https://dagelijksekost.een.be/gerechten/hachis-parmentier-met-witloof',
-        'md5': '30bfffc323009a3e5f689bef6efa2365',
-        'info_dict': {
-            'id': 'md-ast-27a4d1ff-7d7b-425e-b84f-a4d227f592fa',
-            'display_id': 'hachis-parmentier-met-witloof',
-            'ext': 'mp4',
-            'title': 'Hachis parmentier met witloof',
-            'description': 'md5:9960478392d87f63567b5b117688cdc5',
-            'thumbnail': r're:^https?://.*\.jpg$',
-            'duration': 283.02,
-        },
-        'expected_warnings': ['is not a supported codec'],
-    }
-
-    def _real_extract(self, url):
-        display_id = self._match_id(url)
-        webpage = self._download_webpage(url, display_id)
-
-        title = strip_or_none(get_element_by_class(
-            'dish-metadata__title', webpage
-        ) or self._html_search_meta(
-            'twitter:title', webpage))
-
-        description = clean_html(get_element_by_class(
-            'dish-description', webpage)
-        ) or self._html_search_meta(
-            ('description', 'twitter:description', 'og:description'),
-            webpage)
-
-        video_id = self._html_search_regex(
-            r'data-url=(["\'])(?P<id>(?:(?!\1).)+)\1', webpage, 'video id',
-            group='id')
-
-        return {
-            '_type': 'url_transparent',
-            'url': 'https://mediazone.vrt.be/api/v1/dako/assets/%s' % video_id,
-            'ie_key': CanvasIE.ie_key(),
-            'id': video_id,
-            'display_id': display_id,
-            'title': title,
-            'description': description,
-        }
@@ -8,14 +8,16 @@
     compat_str,
 )
 from ..utils import (
+    ExtractorError,
     int_or_none,
     join_nonempty,
     js_to_json,
     orderedSet,
+    parse_iso8601,
     smuggle_url,
     strip_or_none,
+    traverse_obj,
     try_get,
-    ExtractorError,
 )

@@ -202,7 +204,7 @@ def _real_extract(self, url):

 class CBCGemIE(InfoExtractor):
     IE_NAME = 'gem.cbc.ca'
-    _VALID_URL = r'https?://gem\.cbc\.ca/media/(?P<id>[0-9a-z-]+/s[0-9]+[a-z][0-9]+)'
+    _VALID_URL = r'https?://gem\.cbc\.ca/(?:media/)?(?P<id>[0-9a-z-]+/s[0-9]+[a-z][0-9]+)'
     _TESTS = [{
         # This is a normal, public, TV show video
         'url': 'https://gem.cbc.ca/media/schitts-creek/s06e01',
@@ -245,6 +247,9 @@ class CBCGemIE(InfoExtractor):
         },
         'params': {'format': 'bv'},
         'skip': 'Geo-restricted to Canada',
+    }, {
+        'url': 'https://gem.cbc.ca/nadiyas-family-favourites/s01e01',
+        'only_matching': True,
     }]

     _GEO_COUNTRIES = ['CA']
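The relaxed _VALID_URL simply makes the 'media/' path segment optional, so both the old and the new gem.cbc.ca URL shapes keep matching. A quick self-check against the two test URLs from this hunk:

import re

pattern = r'https?://gem\.cbc\.ca/(?:media/)?(?P<id>[0-9a-z-]+/s[0-9]+[a-z][0-9]+)'
for url in ('https://gem.cbc.ca/media/schitts-creek/s06e01',
            'https://gem.cbc.ca/nadiyas-family-favourites/s01e01'):
    print(re.match(pattern, url).group('id'))
# -> schitts-creek/s06e01
# -> nadiyas-family-favourites/s01e01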
@@ -346,7 +351,9 @@ def _find_secret_formats(self, formats, video_id):

     def _real_extract(self, url):
         video_id = self._match_id(url)
-        video_info = self._download_json('https://services.radio-canada.ca/ott/cbc-api/v2/assets/' + video_id, video_id)
+        video_info = self._download_json(
+            f'https://services.radio-canada.ca/ott/cbc-api/v2/assets/{video_id}',
+            video_id, expected_status=426)

         email, password = self._get_login_info()
         if email and password:
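Passing expected_status=426 makes _download_json accept an HTTP 426 ('Upgrade Required') response instead of treating it as an error, since the CBC API now answers with that status while still returning a usable JSON body. Loosely, the behaviour being relied on looks like this standalone sketch (an analogy, not yt-dlp's actual implementation):

import json
import urllib.error
import urllib.request

def download_json_tolerant(url, expected_status=(426,)):
    # Parse JSON from a URL, treating the listed HTTP error codes as success.
    try:
        with urllib.request.urlopen(url) as resp:
            body = resp.read()
    except urllib.error.HTTPError as e:
        if e.code not in expected_status:
            raise
        body = e.read()  # the error response still carries the JSON payload
    return json.loads(body)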
@@ -401,7 +408,7 @@ def _real_extract(self, url):

 class CBCGemPlaylistIE(InfoExtractor):
     IE_NAME = 'gem.cbc.ca:playlist'
-    _VALID_URL = r'https?://gem\.cbc\.ca/media/(?P<id>(?P<show>[0-9a-z-]+)/s(?P<season>[0-9]+))/?(?:[?#]|$)'
+    _VALID_URL = r'https?://gem\.cbc\.ca/(?:media/)?(?P<id>(?P<show>[0-9a-z-]+)/s(?P<season>[0-9]+))/?(?:[?#]|$)'
     _TESTS = [{
         # TV show playlist, all public videos
         'url': 'https://gem.cbc.ca/media/schitts-creek/s06',
@@ -411,6 +418,9 @@ class CBCGemPlaylistIE(InfoExtractor):
             'title': 'Season 6',
             'description': 'md5:6a92104a56cbeb5818cc47884d4326a2',
         },
+    }, {
+        'url': 'https://gem.cbc.ca/schitts-creek/s06',
+        'only_matching': True,
     }]
     _API_BASE = 'https://services.radio-canada.ca/ott/cbc-api/v2/shows/'
@@ -418,7 +428,7 @@ def _real_extract(self, url):
         match = self._match_valid_url(url)
         season_id = match.group('id')
         show = match.group('show')
-        show_info = self._download_json(self._API_BASE + show, season_id)
+        show_info = self._download_json(self._API_BASE + show, season_id, expected_status=426)
         season = int(match.group('season'))

         season_info = next((s for s in show_info['seasons'] if s.get('season') == season), None)
@@ -470,49 +480,90 @@ def _real_extract(self, url):

 class CBCGemLiveIE(InfoExtractor):
     IE_NAME = 'gem.cbc.ca:live'
-    _VALID_URL = r'https?://gem\.cbc\.ca/live/(?P<id>\d+)'
-    _TEST = {
-        'url': 'https://gem.cbc.ca/live/920604739687',
-        'info_dict': {
-            'title': 'Ottawa',
-            'description': 'The live TV channel and local programming from Ottawa',
-            'thumbnail': 'https://thumbnails.cbc.ca/maven_legacy/thumbnails/CBC_OTT_VMS/Live_Channel_Static_Images/Ottawa_2880x1620.jpg',
-            'is_live': True,
-            'id': 'AyqZwxRqh8EH',
-            'ext': 'mp4',
-            'timestamp': 1492106160,
-            'upload_date': '20170413',
-            'uploader': 'CBCC-NEW',
-        },
-        'skip': 'Live might have ended',
-    }
-
-    # It's unclear where the chars at the end come from, but they appear to be
-    # constant. Might need updating in the future.
-    # There are two URLs, some livestreams are in one, and some
-    # in the other. The JSON schema is the same for both.
-    _API_URLS = ['https://tpfeed.cbc.ca/f/ExhSPC/t_t3UKJR6MAT', 'https://tpfeed.cbc.ca/f/ExhSPC/FNiv9xQx_BnT']
+    _VALID_URL = r'https?://gem\.cbc\.ca/live(?:-event)?/(?P<id>\d+)'
+    _TESTS = [
+        {
+            'url': 'https://gem.cbc.ca/live/920604739687',
+            'info_dict': {
+                'title': 'Ottawa',
+                'description': 'The live TV channel and local programming from Ottawa',
+                'thumbnail': 'https://thumbnails.cbc.ca/maven_legacy/thumbnails/CBC_OTT_VMS/Live_Channel_Static_Images/Ottawa_2880x1620.jpg',
+                'is_live': True,
+                'id': 'AyqZwxRqh8EH',
+                'ext': 'mp4',
+                'timestamp': 1492106160,
+                'upload_date': '20170413',
+                'uploader': 'CBCC-NEW',
+            },
+            'skip': 'Live might have ended',
+        },
+        {
+            'url': 'https://gem.cbc.ca/live/44',
+            'info_dict': {
+                'id': '44',
+                'ext': 'mp4',
+                'is_live': True,
+                'title': r're:^Ottawa [0-9\-: ]+',
+                'description': 'The live TV channel and local programming from Ottawa',
+                'live_status': 'is_live',
+                'thumbnail': r're:https://images.gem.cbc.ca/v1/cbc-gem/live/.*'
+            },
+            'params': {'skip_download': True},
+            'skip': 'Live might have ended',
+        },
+        {
+            'url': 'https://gem.cbc.ca/live-event/10835',
+            'info_dict': {
+                'id': '10835',
+                'ext': 'mp4',
+                'is_live': True,
+                'title': r're:^The National \| Biden’s trip wraps up, Paltrow testifies, Bird flu [0-9\-: ]+',
+                'description': 'March 24, 2023 | President Biden’s Ottawa visit ends with big pledges from both countries. Plus, Gwyneth Paltrow testifies in her ski collision trial.',
+                'live_status': 'is_live',
+                'thumbnail': r're:https://images.gem.cbc.ca/v1/cbc-gem/live/.*',
+                'timestamp': 1679706000,
+                'upload_date': '20230325',
+            },
+            'params': {'skip_download': True},
+            'skip': 'Live might have ended',
+        }
+    ]

     def _real_extract(self, url):
         video_id = self._match_id(url)
+        webpage = self._download_webpage(url, video_id)
+        video_info = self._search_nextjs_data(webpage, video_id)['props']['pageProps']['data']

-        for api_url in self._API_URLS:
-            video_info = next((
-                stream for stream in self._download_json(api_url, video_id)['entries']
-                if stream.get('guid') == video_id), None)
-            if video_info:
-                break
-        else:
+        # Two types of metadata JSON
+        if not video_info.get('formattedIdMedia'):
+            video_info = traverse_obj(
+                video_info, (('freeTv', ('streams', ...)), 'items', lambda _, v: v['key'] == video_id, {dict}),
+                get_all=False, default={})
+
+        video_stream_id = video_info.get('formattedIdMedia')
+        if not video_stream_id:
             raise ExtractorError('Couldn\'t find video metadata, maybe this livestream is now offline', expected=True)

+        stream_data = self._download_json(
+            'https://services.radio-canada.ca/media/validation/v2/', video_id, query={
+                'appCode': 'mpx',
+                'connectionType': 'hd',
+                'deviceType': 'ipad',
+                'idMedia': video_stream_id,
+                'multibitrate': 'true',
+                'output': 'json',
+                'tech': 'hls',
+                'manifestType': 'desktop',
+            })
+
         return {
-            '_type': 'url_transparent',
-            'ie_key': 'ThePlatform',
-            'url': video_info['content'][0]['url'],
             'id': video_id,
-            'title': video_info.get('title'),
-            'description': video_info.get('description'),
-            'tags': try_get(video_info, lambda x: x['keywords'].split(', ')),
-            'thumbnail': video_info.get('cbc$staticImage'),
+            'formats': self._extract_m3u8_formats(stream_data['url'], video_id, 'mp4', live=True),
             'is_live': True,
+            **traverse_obj(video_info, {
+                'title': 'title',
+                'description': 'description',
+                'thumbnail': ('images', 'card', 'url'),
+                'timestamp': ('airDate', {parse_iso8601}),
+            })
         }
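The traverse_obj call copes with the two Next.js payload layouts by branching over 'freeTv' and 'streams' and filtering list items on their 'key'. A toy run against an invented payload of that shape (key names come from the diff, values are made up):

from yt_dlp.utils import traverse_obj

page_data = {
    'freeTv': {
        'items': [
            {'key': '44', 'formattedIdMedia': 'OTT-44'},
            {'key': '45', 'formattedIdMedia': 'OTT-45'},
        ],
    },
}

video_id = '44'
video_info = traverse_obj(
    page_data, (('freeTv', ('streams', ...)), 'items',
                lambda _, v: v['key'] == video_id, {dict}),
    get_all=False, default={})
print(video_info.get('formattedIdMedia'))  # -> 'OTT-44'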
@@ -1,8 +1,14 @@
+from .brightcove import BrightcoveNewIE
+from .common import InfoExtractor
 from .theplatform import ThePlatformFeedIE
+from .youtube import YoutubeIE
 from ..utils import (
     ExtractorError,
+    extract_attributes,
+    get_element_html_by_id,
     int_or_none,
     find_xpath_attr,
+    smuggle_url,
     xpath_element,
     xpath_text,
     update_url_query,
@@ -162,3 +168,110 @@ def _extract_video_info(self, content_id, site='cbs', mpx_acc=2198311517):
             'duration': int_or_none(xpath_text(video_data, 'videoLength'), 1000),
             'thumbnail': url_or_none(xpath_text(video_data, 'previewImageURL')),
         })
+
+
+class ParamountPressExpressIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?paramountpressexpress\.com(?:/[\w-]+)+/(?P<yt>yt-)?video/?\?watch=(?P<id>[\w-]+)'
+    _TESTS = [{
+        'url': 'https://www.paramountpressexpress.com/cbs-entertainment/shows/survivor/video/?watch=pnzew7e2hx',
+        'md5': '56631dbcadaab980d1fc47cb7b76cba4',
+        'info_dict': {
+            'id': '6322981580112',
+            'ext': 'mp4',
+            'title': 'I’m Felicia',
+            'description': 'md5:88fad93f8eede1c9c8f390239e4c6290',
+            'uploader_id': '6055873637001',
+            'upload_date': '20230320',
+            'timestamp': 1679334960,
+            'duration': 49.557,
+            'thumbnail': r're:^https://.+\.jpg',
+            'tags': [],
+        },
+    }, {
+        'url': 'https://www.paramountpressexpress.com/cbs-entertainment/video/?watch=2s5eh8kppc',
+        'md5': 'edcb03e3210b88a3e56c05aa863e0e5b',
+        'info_dict': {
+            'id': '6323036027112',
+            'ext': 'mp4',
+            'title': '‘Y&R’ Set Visit: Jerry O’Connell Quizzes Cast on Pre-Love Scene Rituals and More',
+            'description': 'md5:b929867a357aac5544b783d834c78383',
+            'uploader_id': '6055873637001',
+            'upload_date': '20230321',
+            'timestamp': 1679430180,
+            'duration': 132.032,
+            'thumbnail': r're:^https://.+\.jpg',
+            'tags': [],
+        },
+    }, {
+        'url': 'https://www.paramountpressexpress.com/paramount-plus/yt-video/?watch=OX9wJWOcqck',
+        'info_dict': {
+            'id': 'OX9wJWOcqck',
+            'ext': 'mp4',
+            'title': 'Rugrats | Season 2 Official Trailer | Paramount+',
+            'description': 'md5:1f7e26f5625a9f0d6564d9ad97a9f7de',
+            'uploader': 'Paramount Plus',
+            'uploader_id': '@paramountplus',
+            'uploader_url': 'http://www.youtube.com/@paramountplus',
+            'channel': 'Paramount Plus',
+            'channel_id': 'UCrRttZIypNTA1Mrfwo745Sg',
+            'channel_url': 'https://www.youtube.com/channel/UCrRttZIypNTA1Mrfwo745Sg',
+            'upload_date': '20230316',
+            'duration': 88,
+            'age_limit': 0,
+            'availability': 'public',
+            'live_status': 'not_live',
+            'playable_in_embed': True,
+            'view_count': int,
+            'like_count': int,
+            'channel_follower_count': int,
+            'thumbnail': 'https://i.ytimg.com/vi/OX9wJWOcqck/maxresdefault.jpg',
+            'categories': ['Entertainment'],
+            'tags': ['Rugrats'],
+        },
+    }, {
+        'url': 'https://www.paramountpressexpress.com/showtime/yt-video/?watch=_ljssSoDLkw',
+        'info_dict': {
+            'id': '_ljssSoDLkw',
+            'ext': 'mp4',
+            'title': 'Lavell Crawford: THEE Lavell Crawford Comedy Special Official Trailer | SHOWTIME',
+            'description': 'md5:39581bcc3fd810209b642609f448af70',
+            'uploader': 'SHOWTIME',
+            'uploader_id': '@Showtime',
+            'uploader_url': 'http://www.youtube.com/@Showtime',
+            'channel': 'SHOWTIME',
+            'channel_id': 'UCtwMWJr2BFPkuJTnSvCESSQ',
+            'channel_url': 'https://www.youtube.com/channel/UCtwMWJr2BFPkuJTnSvCESSQ',
+            'upload_date': '20230209',
+            'duration': 49,
+            'age_limit': 0,
+            'availability': 'public',
+            'live_status': 'not_live',
+            'playable_in_embed': True,
+            'view_count': int,
+            'like_count': int,
+            'comment_count': int,
+            'channel_follower_count': int,
+            'thumbnail': 'https://i.ytimg.com/vi_webp/_ljssSoDLkw/maxresdefault.webp',
+            'categories': ['People & Blogs'],
+            'tags': 'count:27',
+        },
+    }]
+
+    def _real_extract(self, url):
+        display_id, is_youtube = self._match_valid_url(url).group('id', 'yt')
+        if is_youtube:
+            return self.url_result(display_id, YoutubeIE)
+
+        webpage = self._download_webpage(url, display_id)
+        video_id = self._search_regex(
+            r'\bvideo_id\s*=\s*["\'](\d+)["\']\s*,', webpage, 'Brightcove ID')
+        token = self._search_regex(r'\btoken\s*=\s*["\']([\w.-]+)["\']', webpage, 'token')
+
+        player = extract_attributes(get_element_html_by_id('vcbrightcoveplayer', webpage) or '')
+        account_id = player.get('data-account') or '6055873637001'
+        player_id = player.get('data-player') or 'OtLKgXlO9F'
+        embed = player.get('data-embed') or 'default'
+
+        return self.url_result(smuggle_url(
+            f'https://players.brightcove.net/{account_id}/{player_id}_{embed}/index.html?videoId={video_id}',
+            {'token': token}), BrightcoveNewIE)
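For non-YouTube pages the extractor defers to BrightcoveNewIE, passing the scraped token along via smuggle_url, which stashes extra data in the URL fragment for the target extractor to unpack. A short sketch of that hand-off; the account/player/embed values mirror the fallbacks hard-coded above, while video_id and token stand in for what the page scrape would return:

from yt_dlp.utils import smuggle_url

account_id, player_id, embed = '6055873637001', 'OtLKgXlO9F', 'default'
video_id, token = '6322981580112', '<jwt-from-page>'  # placeholders

print(smuggle_url(
    f'https://players.brightcove.net/{account_id}/{player_id}_{embed}/index.html?videoId={video_id}',
    {'token': token}))
# Brightcove player URL with the token carried in the #__youtubedl_smuggle fragment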
Some files were not shown because too many files have changed in this diff.