diff --git a/README.md b/README.md index f4eadc415..b45a5dec1 100644 --- a/README.md +++ b/README.md @@ -418,7 +418,7 @@ In addition to this, WriteGear also provides flexible access to [**OpenCV's Vide * **Compression Mode:** In this mode, WriteGear utilizes powerful [**FFmpeg**][ffmpeg] inbuilt encoders to encode lossless multimedia files. This mode provides us the ability to exploit almost any parameter available within FFmpeg, effortlessly and flexibly, and while doing that it robustly handles all errors/warnings quietly. **You can find more about this mode [here ➶][cm-writegear-doc]** - * **Non-Compression Mode:** In this mode, WriteGear utilizes basic [**OpenCV's inbuilt VideoWriter API**][opencv-vw] tools. This mode also supports all parameters manipulation available within VideoWriter API, but it lacks the ability to manipulate encoding parameters and other important features like video compression, audio encoding, etc. **You can learn about this mode [here ➶][ncm-writegear-doc]** + * **Non-Compression Mode:** In this mode, WriteGear utilizes basic [**OpenCV's inbuilt VideoWriter API**][opencv-vw] tools. This mode also supports all parameter transformations available within OpenCV's VideoWriter API, but it lacks the ability to manipulate encoding parameters and other important features like video compression, audio encoding, etc. **You can learn about this mode [here ➶][ncm-writegear-doc]** ### WriteGear API Guide: @@ -449,10 +449,9 @@ StreamGear also creates a Manifest file _(such as MPD in-case of DASH)_ or a Mast **StreamGear primarily works in two Independent Modes for transcoding which serves different purposes:** - * **Single-Source Mode:** In this mode, StreamGear transcodes entire video/audio file _(as opposed to frames by frame)_ into a sequence of multiple smaller chunks/segments for streaming.
This mode works exceptionally well, when you're transcoding lossless long-duration videos(with audio) for streaming and required no extra efforts or interruptions. But on the downside, the provided source cannot be changed or manipulated before sending onto FFmpeg Pipeline for processing. ***Learn more about this mode [here ➶][ss-mode-doc]*** - - * **Real-time Frames Mode:** In this mode, StreamGear directly transcodes video-frames _(as opposed to a entire file)_ into a sequence of multiple smaller chunks/segments for streaming. In this mode, StreamGear supports real-time [`numpy.ndarray`](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) frames, and process them over FFmpeg pipeline. But on the downside, audio has to added manually _(as separate source)_ for streams. ***Learn more about this mode [here ➶][rtf-mode-doc]*** + * **Single-Source Mode:** In this mode, StreamGear **transcodes the entire video file** _(as opposed to frame-by-frame)_ into a sequence of multiple smaller chunks/segments for streaming. This mode works exceptionally well when you're transcoding long-duration lossless videos _(with audio)_ for streaming with no interruptions. But on the downside, the provided source cannot be flexibly manipulated or transformed before being sent to the FFmpeg pipeline for processing. ***Learn more about this mode [here ➶][ss-mode-doc]*** + * **Real-time Frames Mode:** In this mode, StreamGear directly **transcodes frame-by-frame** _(as opposed to an entire video file)_ into a sequence of multiple smaller chunks/segments for streaming. This mode works exceptionally well when you desire to flexibly manipulate or transform [`numpy.ndarray`](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) frames in real-time before sending them to the FFmpeg pipeline for processing. But on the downside, audio has to be added manually _(as a separate source)_ for streams.
***Learn more about this mode [here ➶][rtf-mode-doc]*** ### StreamGear API Guide: @@ -507,7 +506,7 @@ WebGear API works on [**Starlette**](https://www.starlette.io/)'s ASGI applicati WebGear API uses an intraframe-only compression scheme under the hood where the sequence of video-frames are first encoded as JPEG-DIB (JPEG with Device-Independent Bit compression) and then streamed over HTTP using Starlette's Multipart [Streaming Response](https://www.starlette.io/responses/#streamingresponse) and a [Uvicorn](https://www.uvicorn.org/#quickstart) ASGI Server. This method imposes lower processing and memory requirements, but the quality is not the best, since JPEG compression is not very efficient for motion video. -In layman's terms, WebGear acts as a powerful **Video Broadcaster** that transmits live video-frames to any web-browser in the network. Additionally, WebGear API also provides a special internal wrapper around [VideoGear](#videogear), which itself provides internal access to both [CamGear](#camgear) and [PiGear](#pigear) APIs, thereby granting it exclusive power of broadcasting frames from any incoming stream. It also allows us to define our custom Server as source to manipulate frames easily before sending them across the network(see this [doc][webgear-cs] example). +In layman's terms, WebGear acts as a powerful **Video Broadcaster** that transmits live video-frames to any web-browser in the network. Additionally, WebGear API also provides a special internal wrapper around [VideoGear](#videogear), which itself provides internal access to both [CamGear](#camgear) and [PiGear](#pigear) APIs, thereby granting it exclusive power of broadcasting frames from any incoming stream. It also allows us to define our custom Server as a source to transform frames easily before sending them across the network _(see this [doc][webgear-cs] example)_.
**Below is a snapshot of a WebGear Video Server in action on Chrome browser:** @@ -558,7 +557,7 @@ web.shutdown() WebGear_RTC is implemented with the help of [**aiortc**][aiortc] library which is built on top of asynchronous I/O framework for Web Real-Time Communication (WebRTC) and Object Real-Time Communication (ORTC) and supports many features like SDP generation/parsing, Interactive Connectivity Establishment with half-trickle and mDNS support, DTLS key and certificate generation, DTLS handshake, etc. -WebGear_RTC can handle [multiple consumers][webgear_rtc-mc] seamlessly and provides native support for ICE _(Interactive Connectivity Establishment)_ protocol, STUN _(Session Traversal Utilities for NAT)_, and TURN _(Traversal Using Relays around NAT)_ servers that help us to easily establish direct media connection with the remote peers for uninterrupted data flow. It also allows us to define our custom Server as a source to manipulate frames easily before sending them across the network(see this [doc][webgear_rtc-cs] example). +WebGear_RTC can handle [multiple consumers][webgear_rtc-mc] seamlessly and provides native support for ICE _(Interactive Connectivity Establishment)_ protocol, STUN _(Session Traversal Utilities for NAT)_, and TURN _(Traversal Using Relays around NAT)_ servers that help us to easily establish direct media connection with the remote peers for uninterrupted data flow. It also allows us to define our custom Server as a source to transform frames easily before sending them across the network _(see this [doc][webgear_rtc-cs] example)_. WebGear_RTC API works in conjunction with [**Starlette**][starlette]'s ASGI application and provides easy access to its complete framework.
WebGear_RTC can also flexibly interact with Starlette's ecosystem of shared middleware, mountable applications, [Response classes](https://www.starlette.io/responses/), [Routing tables](https://www.starlette.io/routing/), [Static Files](https://www.starlette.io/staticfiles/), [Templating engine(with Jinja2)](https://www.starlette.io/templates/), etc. @@ -615,7 +614,7 @@ web.shutdown() NetGear_Async is built on [`zmq.asyncio`][asyncio-zmq], and powered by a high-performance asyncio event loop called [**`uvloop`**][uvloop] to achieve unmatchable high-speed and lag-free video streaming over the network with minimal resource constraints. NetGear_Async can transfer thousands of frames in just a few seconds without causing any significant load on your system. -NetGear_Async provides complete server-client handling and options to use variable protocols/patterns similar to [NetGear API](#netgear). Furthermore, NetGear_Async allows us to define our custom Server as source to manipulate frames easily before sending them across the network(see this [doc][netgear_Async-cs] example). +NetGear_Async provides complete server-client handling and options to use variable protocols/patterns similar to [NetGear API](#netgear). Furthermore, NetGear_Async allows us to define our custom Server as a source to transform frames easily before sending them across the network _(see this [doc][netgear_Async-cs] example)_. NetGear_Async now supports additional [**bidirectional data transmission**][btm_netgear_async] between receiver(client) and sender(server) while transferring video-frames. Users can easily build complex applications such as [Real-Time Video Chat][rtvc] in just a few lines of code.
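The WebGear hunks above describe JPEG frames streamed as a Multipart Streaming Response. As a rough, hypothetical illustration of that framing (the helper name and boundary string are my own, not part of vidgear's API), each encoded frame travels as one `multipart/x-mixed-replace` part:

```python
def mjpeg_part(jpeg_bytes: bytes, boundary: bytes = b"frame") -> bytes:
    """Frame one JPEG-encoded image as a single part of a
    multipart/x-mixed-replace HTTP response, the scheme MJPEG-style
    broadcasters use: each new part replaces the previous image."""
    return (
        b"--" + boundary + b"\r\n"
        + b"Content-Type: image/jpeg\r\n"
        + b"Content-Length: " + str(len(jpeg_bytes)).encode() + b"\r\n\r\n"
        + jpeg_bytes + b"\r\n"
    )

# A server would declare the response header as:
#   Content-Type: multipart/x-mixed-replace; boundary=frame
# and then yield mjpeg_part(frame) for every freshly encoded frame.
```

This sketch only shows the wire framing; the actual APIs additionally handle encoding, threading, and the ASGI response plumbing.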
diff --git a/codecov.yml b/codecov.yml index 544e750c8..7cef6987d 100644 --- a/codecov.yml +++ b/codecov.yml @@ -30,5 +30,6 @@ ignore: - "vidgear/tests" - "docs" - "scripts" + - "vidgear/gears/__init__.py" #trivial - "vidgear/gears/asyncio/__main__.py" #trivial - "setup.py" \ No newline at end of file diff --git a/docs/bonus/reference/helper.md b/docs/bonus/reference/helper.md index 20c94b625..2214e37f6 100644 --- a/docs/bonus/reference/helper.md +++ b/docs/bonus/reference/helper.md @@ -98,6 +98,10 @@ limitations under the License.   +::: vidgear.gears.helper.import_dependency_safe + +  + ::: vidgear.gears.helper.get_video_bitrate   diff --git a/docs/bonus/reference/helper_async.md b/docs/bonus/reference/helper_async.md index cfc329656..8e3e56b87 100644 --- a/docs/bonus/reference/helper_async.md +++ b/docs/bonus/reference/helper_async.md @@ -18,14 +18,6 @@ limitations under the License. =============================================== --> -::: vidgear.gears.asyncio.helper.logger_handler - -  - -::: vidgear.gears.asyncio.helper.mkdir_safe - -  - ::: vidgear.gears.asyncio.helper.reducer   diff --git a/docs/changelog.md b/docs/changelog.md index b21d7db5d..41d007afa 100644 --- a/docs/changelog.md +++ b/docs/changelog.md @@ -20,90 +20,412 @@ limitations under the License. # Release Notes -## v0.2.2 (In Progress) +## v0.2.2 (2021-09-02) ??? tip "New Features" + - [x] **StreamGear:** + * Native Support for Apple HLS Multi-Bitrate Streaming format: + + Added support for new [Apple HLS](https://developer.apple.com/documentation/http_live_streaming) _(HTTP Live Streaming)_ HTTP streaming format in StreamGear. + + Implemented default workflow for auto-generating primary HLS stream of same resolution and framerate as source. + + Added HLS support in *Single-Source* and *Real-time Frames* Modes. + + Implemented inherent support for `fmp4` and `mpegts` HLS segment types. + + Added adequate default parameters required for transcoding HLS streams.
+ + Added native support for HLS live-streaming. + + Added `"hls"` value to `format` parameter for easily selecting HLS format. + + Added HLS support in `-streams` attribute for transcoding additional streams. + + Added support for `.m3u8` and `.ts` extensions in `clear_prev_assets` workflow. + + Added validity check for `.m3u8` extension in output when HLS format is used. + + Separated DASH and HLS command handlers. + + Created HLS format-exclusive parameters. + + Implemented `-hls_base_url` FFmpeg parameter support. + * Added support for audio input from external device: + + Implemented support for audio input from external device. + + Users can now easily add audio device and decoder by formatting them as a python list. + + Modified `-audio` parameter to support `list` data type as value. + + Modified `validate_audio` helper function to validate external audio devices. + * Added `-seg_duration` to control segment duration. - [x] **NetGear:** - * New SSH Tunneling Mode for connecting ZMQ sockets across machines via SSH tunneling. - * Added new `ssh_tunnel_mode` attribute to enable ssh tunneling at provide address at server end only. - * Implemented new `check_open_port` helper method to validate availability of host at given open port. - * Added new attributes `ssh_tunnel_keyfile` and `ssh_tunnel_pwd` to easily validate ssh connection. - * Extended this feature to be compatible with bi-directional mode and auto-reconnection. - * Initially disabled support for exclusive Multi-Server and Multi-Clients modes. - * Implemented logic to automatically enable `paramiko` support if installed. - * Reserved port-47 for testing. + * New SSH Tunneling Mode for remote connection: + + New SSH Tunneling Mode for connecting ZMQ sockets across machines via SSH tunneling. + + Added new `ssh_tunnel_mode` attribute to enable ssh tunneling at provided address at server end only. + + Implemented new `check_open_port` helper method to validate availability of host at given open port.
+ + Added new attributes `ssh_tunnel_keyfile` and `ssh_tunnel_pwd` to easily validate ssh connection. + + Extended this feature to be compatible with bi-directional mode and auto-reconnection. + + Disabled support for exclusive Multi-Server and Multi-Clients modes. + + Implemented logic to automatically enable `paramiko` support if installed. + + Reserved port-`47` for testing. + * Additional colorspace support for input frames with Frame-Compression enabled: + + Allowed manually selecting colorspace on-the-fly with JPEG frame compression. + + Updated `jpeg_compression` dict parameter to support colorspace string values. + + Added all colorspace values supported by the underlying `simplejpeg` library. + + Server enforced frame-compression colorspace on client(s). + + Enabled "BGR" colorspace by default. + + Added example for changing incoming frames colorspace with NetGear's Frame Compression. + + Updated Frame Compression parameters in NetGear docs. + + Updated existing CI tests to cover new frame compression functionality. + - [x] **NetGear_Async:** + * New exclusive Bidirectional Mode for bidirectional data transfer: + + NetGear_Async's first-ever exclusive Bidirectional mode with pure asyncio implementation. + + :warning: Bidirectional mode is only available with User-defined Custom Source _(i.e. `source=None`)_. + + Added support for `PAIR` & `REQ/REP` bidirectional patterns for this mode. + + Added powerful `asyncio.Queues` for handling user data and frames in real-time. + + Implemented new `transceive_data` method to Transmit _(in Receive mode)_ and Receive _(in Send mode)_ data in real-time. + + Implemented `terminate_connection` internal asyncio method to safely terminate ZMQ connection and queues. + + Added `msgpack` automatic compression encoding and decoding of data and frames in bidirectional mode. + + Added support for `np.ndarray` video frames. + + Added new `bidirectional_mode` attribute for enabling this mode.
+ + Added 8-digit random alphanumeric id generator for each device. + + :warning: NetGear_Async will throw `RuntimeError` if bidirectional mode is disabled at server or client but not both. + * Added new `disable_confirmation` used to force-disable termination confirmation from client in `terminate_connection`. + * Added `task_done()` method after every `get()` call to gracefully terminate queues. + * Added new `secrets` and `string` imports. + - [x] **WebGear:** + * Updated JPEG Frame compression with `simplejpeg`: + + Implemented JPEG compression algorithm for 4-5% performance boost at cost of minor loss in quality. + + Utilized `encode_jpeg` and `decode_jpeg` methods to implement turbo-JPEG transcoding with `simplejpeg`. + + Added new options to control JPEG frames *quality*, enable fastest *dct*, and fast *upsampling* to boost performance. + + Added new `jpeg_compression`, `jpeg_compression_quality`, `jpeg_compression_fastdct`, `jpeg_compression_fastupsample` attributes. + + Enabled fast DCT by default with JPEG frames at `90%`. + + Incremented default frame reduction to `25%`. + + Implemented automated grayscale colorspace frames handling. + + Updated old and added new usage examples. + + Dropped support for deprecated attributes from WebGear and added new attributes. + * Added new WebGear Theme: _(Check out at https://github.com/abhiTronix/vidgear-vitals)_ + - Added responsive image scaling according to screen aspect ratios. + - Added responsive text scaling. + - Added rounded border and auto-center to image tag. + - Added bootstrap css properties to implement auto-scaling. + - Removed old `resize()` hack. + - Improved text spacing and weight. + - Integrated toggle full-screen to new implementation. + - Hid scrollbar both in WebGear_RTC and WebGear Themes. + - Beautified files syntax and updated files checksum. + - Refactored files and removed redundant code. + - Bumped theme version to `v0.1.2`. - [x] **WebGear_RTC:** - * Added native support for middlewares.
- * Added new global `middleware` variable for easily defining Middlewares as list. - * Added validity check for Middlewares. - * Added tests for middlewares support. - * Added example for middlewares support. - * Added related imports. + * Added native support for middlewares: + + Added new global `middleware` variable for easily defining Middlewares as list. + + Added validity check for Middlewares. + + Added tests for middlewares support. + + Added example for middlewares support. + + Extended middlewares support to WebGear API too. + + Added related imports. + * Added new WebGear_RTC Theme: _(Check out at https://github.com/abhiTronix/vidgear-vitals)_ + + Implemented new responsive video scaling according to screen aspect ratios. + + Added bootstrap CSS properties to implement auto-scaling. + + Removed old `resize()` hack. + + Beautified files syntax and updated files checksum. + + Refactored files and removed redundant code. + + Bumped theme version to `v0.1.2`. + - [x] **Helper:** + * New automated interpolation selection for gears: + + Implemented `retrieve_best_interpolation` method to automatically select best available interpolation within OpenCV. + + Added support for this method in WebGear, WebGear_RTC and Stabilizer Classes/APIs. + + Added new CI tests for this feature. + * Implemented `get_supported_demuxers` method to get list of supported demuxers. - [x] **CI:** - * Added new `no-response` work-flow for stale issues. - * Added NetGear CI Tests - * Added new CI tests for SSH Tunneling Mode. - * Added "paramiko" to CI dependencies. - + * Added new `no-response` work-flow for stale issues. + * Added new CI tests for SSH Tunneling Mode. + * Added `paramiko` to CI dependencies. + * Added support for `"hls"` format in existing CI tests. + * Added new functions `check_valid_m3u8` and `extract_meta_video` for validating HLS files. + * Added new `m3u8` dependency to CI workflows.
+ * Added complete CI tests for NetGear_Async's new Bidirectional Mode: + + Implemented new exclusive `Custom_Generator` class for testing bidirectional data dynamically on server-end. + + Implemented new exclusive `client_dataframe_iterator` method for testing bidirectional data on client-end. + + Implemented two new tests: `test_netgear_async_options` and `test_netgear_async_bidirectionalmode`. + + Added `timeout` value on server end in CI tests. + - [x] **Setup.py:** + * Added new `cython` and `msgpack` dependencies. + * Added `msgpack` and `msgpack_numpy` to auto-install latest versions. + - [x] **BASH:** + * Added new `temp_m3u8` folder for generating M3U8 assets in CI tests. - [x] **Docs:** - * Added Zenodo DOI badge and its reference in BibTex citations. - * Added `pymdownx.striphtml` plugin for stripping comments. - * Added complete docs for SSH Tunneling Mode. - * Added complete docs for NetGear's SSH Tunneling Mode. - * Added new usage example and related information. - * Added new image assets for ssh tunneling example. - * New admonitions and beautified css + * Added docs for new Apple HLS StreamGear format: + + Added StreamGear HLS transcoding examples for both StreamGear modes. + + Updated StreamGear parameters w.r.t. new HLS configurations. + + Added open-sourced *"Sintel" - project Durian Teaser Demo* with StreamGear's HLS stream using `Clappr` and raw.githack.com. + + Added new HLS chunks at https://github.com/abhiTronix/vidgear-docs-additionals for StreamGear. + + Added support for HLS video in Clappr within `custom.js` using HlsjsPlayback plugin. + + Added support for Video Thumbnail preview for HLS video in Clappr within `custom.js`. + + Added `hlsjs-playback.min.js` JS script and suitable configuration for HlsjsPlayback plugin. + + Added custom labels for quality levels selector in `custom.js`. + + Added new docs content related to new Apple HLS format. + + Updated DASH chunk folder at https://github.com/abhiTronix/vidgear-docs-additionals.
+ + Added example for audio input support from external device in StreamGear. + + Added steps for using `-audio` attribute on different OS platforms in StreamGear. + * Added usage examples for NetGear_Async's Bidirectional Mode: + + Added new Usage examples and Reference doc for NetGear_Async's Bidirectional Mode. + + Added new image asset for NetGear_Async's Bidirectional Mode. + + Added NetGear_Async's `option` parameter reference. + + Updated NetGear_Async definition in docs. + + Changed font size for Helper methods. + + Renamed `Bonus` section to `References` in `mkdocs.yml`. + * Added Gitter sidecar embed widget: + + Imported gitter-sidecar script to `main.html`. + + Updated `custom.js` to set global window option. + + Updated Sidecar UI in `custom.css`. + * Added bonus examples to help section: + + Implemented a curated list of more advanced examples with unusual configuration for each API. + * Added several new content sections and updated context. + * Added support for search suggestions, search highlighting and search sharing _(i.e. deep linking)_. + * Added more content to docs to make it more user-friendly. + * Added warning that JPEG Frame-Compression is disabled with Custom Source in WebGear. + * Added steps for identifying and specifying sound card on different OS platforms in WriteGear. + * Added Zenodo DOI badge and its reference in BibTex citations. + * Added `extra.homepage` parameter, which allows for setting a dedicated URL for `site_url`. + * Added `pymdownx.striphtml` plugin for stripping comments. + * Added complete docs for SSH Tunneling Mode. + * Added complete docs for NetGear's SSH Tunneling Mode. + * Added `pip` upgrade related docs. + * Added docs for installing vidgear with only selective dependencies. + * Added new `advance`/`experiment` admonition with new background color. + * Added new icon SVGs for `advance` and `warning` admonitions. + * Added new usage example and related information.
+ * Added new image assets for ssh tunneling example. + * Added new admonitions. + * Added new FAQs. ??? success "Updates/Improvements" - - [x] Added exception for RunTimeErrors in NetGear CI tests. - - [x] Extended Middlewares support to WebGear API too. + - [x] VidGear Core: + * New behavior to virtually isolate optional API-specific dependencies by silencing `ImportError` on all VidGear's API imports. + * Implemented algorithm to cache all imports on startup but silence any `ImportError` on missing optional dependency. + * :warning: Now `ImportError` will be raised only if a certain API-specific dependency is missing during the given API's initialization. + * New `import_dependency_safe` to import specified dependency safely with `importlib` module. + * Replaced all API imports with `import_dependency_safe`. + * Added support for relative imports in `import_dependency_safe`. + * Implemented `error` parameter: by default, raises `ImportError` with a meaningful message if a dependency is missing; otherwise, with `error = log` a warning will be logged, and with `error = silent` everything stays quiet. + * Implemented behavior that if a dependency is present, but older than `min_version` specified, an error is always raised. + * Implemented `custom_message` to display custom message on error instead of default one. + * Implemented separate `import_core_dependency` function to import and check for specified core dependency. + * `ImportError` will be raised immediately if core dependency not found. + - [x] StreamGear: + * Replaced deprecated `-min_seg_duration` flag with `-seg_duration`. + * Removed redundant `-re` flag from RTFM. + * Improved Live-Streaming performance by disabling SegmentTimeline. + * Improved DASH assets detection for removal by using filename prefixes. + - [x] NetGear: + * Replaced `np.newaxis` with `np.expand_dims`.
+ + Replaced `random` module with `secrets` while generating system ID. + + Updated array indexing with `np.copy`. + - [x] NetGear_Async: + * Improved custom source handling. + * Removed deprecated `loop` parameter from asyncio methods. + * Re-implemented `skip_loop` parameter in `close()` method. + * :warning: `run_until_complete` will not be used if `skip_loop` is enabled. + * :warning: `skip_loop` will now create an asyncio task instead and will enable `disable_confirmation` by default. + * Replaced `create_task` with `ensure_future` to ensure backward compatibility with python-3.6 legacies. + * Simplified code for `transceive_data` method. + - [x] WebGear_RTC: + * Improved handling of failed ICE connection. + * Made `is_running` variable globally available for internal use. + - [x] Helper: + * Added `4320p` resolution support to `dimensions_to_resolutions` method. + * Implemented new `delete_file_safe` to safely delete files at given path. + * Replaced `os.remove` calls with `delete_file_safe`. + * Added support for filename prefixes in `delete_ext_safe` method. + * Improved and simplified `create_blank_frame` function's frame-channels detection. + * Added `logging` parameter to `capPropId` function to forcefully discard any error _(if required)_. + - [x] Setup.py: + * Added patch for `numpy` dependency, as `numpy` recently dropped support for python 3.6.x legacies. See https://github.com/numpy/numpy/releases/tag/v1.20.0 + * Removed version check on certain dependencies. + * Re-added `aiortc` to auto-install latest version. + - [x] Asyncio: + * Changed `asyncio.sleep` value to `0`. + + The amount of sleep time is irrelevant; the only purpose `await asyncio.sleep()` serves is to force asyncio to suspend execution to the event loop and give other tasks a chance to run, and `await asyncio.sleep(0)` achieves the same effect. https://stackoverflow.com/a/55782965/10158117 + - [x] License: + * Dropped publication year range to avoid confusion.
_(Signed and Approved by @abhiTronix)_ + * Updated Vidgear license's year of first publication of the work in accordance with US copyright notices defined by Title 17, Chapter 4 _(Visually perceptible copies)_: https://www.copyright.gov/title17/92chap4.html + * Reflected changes in all copyright notices. + - [x] CI: + * Updated macOS VM Image to latest in Azure DevOps. + * Updated VidGear Docs Deployer Workflow. + * Updated WebGear_RTC CI tests. + * Removed redundant code from CI tests. + * Updated tests to increase coverage. + * Enabled Helper tests for python 3.8+ legacies. + * Enabled logging in `validate_video` method. + * Added `-hls_base_url` to StreamGear tests. + * Updated `mpegdash` dependency to `0.3.0-dev2` version in Appveyor. + * Updated CI tests for new HLS support. + * Updated CI tests from scratch for new native HLS support in StreamGear. + * Updated test patch for StreamGear. + * Added exception for RunTimeErrors in NetGear CI tests. + * Added more directories to Codecov ignore list. + * Imported relative `logger_handler` for asyncio tests. - [x] Docs: - * Added `extra.homepage` parameter, which allows for setting a dedicated URL for `site_url`. * Re-positioned few docs comments at bottom for easier detection during stripping. + * Updated to new extra `analytics` parameter in Material Mkdocs. * Updated dark theme to `dark orange`. + * Changed fonts => text: `Muli` & code: `Fira Code`. * Updated fonts to `Source Sans Pro`. - * Fixed missing heading in VideoGear. - * Update setup.py update link for assets. - * Added missing StreamGear Code docs. + * Updated `setup.py` update-link for modules. + * Re-added missing StreamGear Code docs. * Several minor tweaks and typos fixed. - * Updated 404 page and workflow. - * Updated README.md and mkdocs.yml with new additions. + * Updated `404.html` page. + * Updated admonitions colors and beautified `custom.css`. + * Replaced VideoGear & CamGear with OpenCV in CPU intensive examples.
+ * Updated `mkdocs.yml` with new changes and URLs. + * Moved FAQ examples to bonus examples. + * Moved StreamGear primary modes to separate sections for better readability. + * Implemented separate overview and usage example pages for StreamGear primary modes. + * Improved StreamGear docs context and simplified language. + * Renamed StreamGear `overview` page to `introduction`. * Re-written Threaded-Queue-Mode from scratch with elaborated functioning. - * Replace Paypal with Liberpay in FUNDING.yml + * Replaced *Paypal* with *Liberpay* in `FUNDING.yml`. * Updated FFmpeg Download links. - * Restructured docs. - * Updated mkdocs.yml. - - [x] Helper: - * Implemented new `delete_file_safe` to safely delete files at given path. - * Replaced `os.remove` calls with `delete_file_safe`. - - [x] CI: - * Updated VidGear Docs Deployer Workflow - * Updated test + * Reverted UI change in CSS. + * Updated `changelog.md` and fixed clutter. + * Updated `README.md` and `mkdocs.yml` with new additions. + * Updated context for CamGear example. + * Restructured and added more content to docs. + * Updated comments in source code. + * Removed redundant data table tweaks from `custom.css`. + * Re-aligned badges in README.md. + * Beautified `custom.css`. + * Updated `mkdocs.yml`. + * Updated context and fixed typos. + * Added missing helper methods in Reference. + * Updated Admonitions. + * Updated image assets. + * Bumped CodeCov. + - [x] Logging: + * Improved logging level-names. + * Updated logging messages. + - [x] Minor tweaks to `needs-more-info` template. - [x] Updated issue templates and labels. + - [x] Removed redundant imports. ??? danger "Breaking Updates/Changes" + - [ ] Virtually isolated all API-specific dependencies. Now `ImportError` for API-specific dependencies will be raised only when any of them is missing at the API's initialization. - [ ] Renamed `delete_safe` to `delete_ext_safe`.
- + - [ ] Dropped support for `frame_jpeg_quality`, `frame_jpeg_optimize`, `frame_jpeg_progressive` attributes from WebGear. ??? bug "Bug-fixes" - - [x] Critical Bugfix related to OpenCV Binaries import. - * Bug fixed for OpenCV import comparsion test failing with Legacy versions and throwing ImportError. - * Replaced `packaging.parse_version` with more robust `distutils.version`. - * Removed redundant imports. - - [x] Setup: + - [x] CamGear: + * Hot-fix for Live Camera Streams: + + Added new event flag to keep check on stream read. + + Implemented event wait for `read()` to block it when source stream is busy. + + Added and linked `THREAD_TIMEOUT` with event wait timeout. + + Improved backward compatibility of new additions. + * Enforced logging for YouTube live. + - [x] NetGear: + * Fixed Bidirectional Video-Frame Transfer broken with frame-compression: + + Fixed `return_data` interfering with returned JSON-data in receive mode. + + Fixed logic. + * Fixed color-subsampling interfering with colorspace. + * Patched external `simplejpeg` bug. Issue: https://gitlab.com/jfolz/simplejpeg/-/issues/11 + + Added `np.squeeze` to drop grayscale frame's 3rd dimension on Client's end. + * Fixed bug that caused server-end frame dimensions to differ from client's end when frame compression is enabled. + - [x] NetGear_Async: + * Fixed bug related to asyncio queue freezing on calling `join()`. + * Fixed ZMQ connection bugs in bidirectional mode. + * Fixed several critical bugs in event loop handling. + * Fixed several bugs in bidirectional mode implementation. + * Fixed missing socket termination in both server and client end. + * Fixed `timeout` parameter logic. + * Fixed typos in error messages. + - [x] WebGear_RTC: + * Fixed stream freezes after web-page reloading: + + Implemented new algorithm to continue stream even when webpage is reloaded. + + Inherited and modified `next_timestamp` VideoStreamTrack method for generating accurate timestamps.
+ + Implemented `reset_connections` callable to reset all peer connections and recreate Video-Server timestamps. (Implemented by @kpetrykin) + + Added `close_connection` endpoint in JavaScript to inform server page refreshing. (Thanks to @kpetrykin) + + Added exclusive reset connection node `/close_connection` in routes. + + Added `reset()` method to Video-Server class for manually resetting timestamp clock. + + Added `reset_enabled` flag to keep check on reloads. + + Fixed premature webpage auto-reloading. + + Added additional related imports. + * Fixed web-page reloading bug after stream ended: + + Disable webpage reload behavior handling for Live broadcasting. + + Disable reload CI test on Windows machines due to random failures. + + Improved handling of failed ICE connection. + * Fixed Assertion error bug: + + Source must raise MediaStreamError when stream ends instead of returning None-type. + - [x] WebGear: + * Removed format-specific OpenCV decoding and encoding support for WebGear. + - [x] Helper: + * Regex bugs fixed: + + New improved regex for discovering supported encoders in `get_supported_vencoders`. + + Re-implemented check for extracting only valid output protocols in `is_valid_url`. + + Minor tweaks for better regex compatibility. + * Bugfix related to OpenCV import: + + Bug fixed for OpenCV import comparison test failing with Legacy versions and throwing `ImportError`. + + Replaced `packaging.parse_version` with more robust `distutils.version`. + * Fixed bug with `create_blank_frame` that throws an error with gray frames: + + Implemented automatic output channel correction inside `create_blank_frame` function. + + Extended automatic output channel correction support to asyncio package. + * Implemented `RTSP` protocol validation as _demuxer_, since it's not a protocol but a demuxer.
 + * Removed redundant `logger_handler`, `mkdir_safe`, `retrieve_best_interpolation`, `capPropId` helper functions from asyncio package.
Relatively imported helper functions from non-asyncio package. + * Removed unused `aiohttp` dependency. + * Removed `asctime` formatting from logging. + - [x] StreamGear: + * Fixed Multi-Bitrate HLS VOD streams: + + Re-implemented complete workflow for Multi-Bitrate HLS VOD streams. + + Extended support to both *Single-Source* and *Real-time Frames* Modes. + * Fixed bugs with audio-video mapping. + * Fixed master playlist not generating in output. + * Fixed improper `-seg_duration` value resulting in broken pipeline. + * Fixed expected aspect ratio not calculated correctly for additional streams. + * Fixed stream not terminating when provided input from external audio device. + * Fixed bugs related to external audio not mapped correctly in HLS format. + * Fixed OPUS audio fragments not supported with MP4 video in HLS. + * Fixed unsupported high audio bit-rate bug. + - [x] Setup.py: + * Fixed `latest_version` returning incorrect version for some PYPI packages. * Removed `latest_version` variable support from `simplejpeg`. - * Fixed minor typos in dependencies. - - [x] Setup_cfg: Replaced dashes with underscores to remove warnings. + * Fixed `streamlink` only supporting requests==2.25.1 on Windows. + * Removed all redundant dependencies like `colorama`, `aiofiles`, `aiohttp`. + * Fixed typos in dependencies. + - [x] Setup.cfg: + * Replaced dashes with underscores to remove warnings. + - [x] CI: + * Replaced buggy `starlette.TestClient` with `async-asgi-testclient` in WebGear_RTC + * Removed `run()` method and replaced with pure asyncio implementation. + * Added new `async-asgi-testclient` CI dependency. + * Fixed `fake_picamera` class logger calling `vidgear` imports prematurely before importing `picamera` class in tests. + + Implemented new `fake_picamera` class logger inherently with `logging` module. + + Moved `sys.module` logic for faking to `init.py`. + + Added `__init__.py` to ignore in Codecov. 
+ * Fixed event loop closing prematurely while reloading: + + Internally disabled suspending event loop while reloading. + * Event Policy Loop patcher added for WebGear_RTC tests. + * Fixed `return_assets_path` path bug. + * Fixed typo in `TimeoutError` exception import. + * Fixed eventloop is already closed bug. + * Fixed eventloop bugs in Helper CI tests. + * Fixed several minor bugs related to new CI tests. + * Fixed bug in PiGear tests. - [x] Docs: * Fixed 404 page does not work outside the site root with mkdocs. * Fixed markdown files comments not stripped when converted to HTML. - * Fixed typos - + * Fixed missing heading in VideoGear. + * Typos in links and code comments fixed. + * Several minor tweaks and typos fixed. + * Fixed improper URLs/Hyperlinks and related typos. + * Fixed typos in usage examples. + * Fixed redundant properties in CSS. + * Fixed bugs in `mkdocs.yml`. + * Fixed docs contexts and typos. + * Fixed `stream.release()` missing in docs. + * Fixed several typos in code comments. + * Removed dead code from docs. + - [x] Refactored Code and reduced redundancy. + - [x] Fixed shutdown in `main.py`. + - [x] Fixed logging comments. ??? question "Pull Requests" * PR #210 * PR #215 + * PR #222 + * PR #223 + * PR #227 + * PR #231 + * PR #233 + * PR #237 + * PR #239 + * PR #243   diff --git a/docs/gears/camgear/advanced/source_params.md b/docs/gears/camgear/advanced/source_params.md index 01cb9287d..0c91b9cdb 100644 --- a/docs/gears/camgear/advanced/source_params.md +++ b/docs/gears/camgear/advanced/source_params.md @@ -20,17 +20,22 @@ limitations under the License. # Source Tweak Parameters for CamGear API -  +
+ Source Tweak Parameters +
## Overview -The [`options`](../../params/#options) dictionary parameter in CamGear, gives user the ability to alter various **Source Tweak Parameters** available within [OpenCV's VideoCapture Class](https://docs.opencv.org/master/d8/dfe/classcv_1_1VideoCapture.html#a57c0e81e83e60f36c83027dc2a188e80). These tweak parameters can be used to manipulate input source Camera-Device properties _(such as its brightness, saturation, size, iso, gain etc.)_ seamlessly. Thereby, All Source Tweak Parameters supported by CamGear API are disscussed in this document. +The [`options`](../../params/#options) dictionary parameter in CamGear gives users the ability to alter various parameters available within [OpenCV's VideoCapture Class](https://docs.opencv.org/master/d8/dfe/classcv_1_1VideoCapture.html#a57c0e81e83e60f36c83027dc2a188e80). + +These tweak parameters can be used to transform input Camera-Source properties _(such as its brightness, saturation, size, iso, gain etc.)_ seamlessly. All parameters supported by CamGear API are discussed in this document.   -!!! quote "" - ### Exclusive CamGear Parameters +### Exclusive CamGear Parameters + +!!! quote "" In addition to Source Tweak Parameters, CamGear also provides some exclusive attributes for its [`options`](../../params/#options) dictionary parameters. These attributes are as follows: diff --git a/docs/gears/camgear/usage.md index f40d4f10f..67f8e9b05 100644 --- a/docs/gears/camgear/usage.md +++ b/docs/gears/camgear/usage.md @@ -66,7 +66,7 @@ stream.stop() ## Using CamGear with Streaming Websites -CamGear API provides direct support for piping video streams from various popular streaming services like [Twitch](https://www.twitch.tv/), [Livestream](https://livestream.com/), [Dailymotion](https://www.dailymotion.com/live), and [many more ➶](https://streamlink.github.io/plugin_matrix.html#plugins).
All you have to do is to provide the desired Video's URL to its `source` parameter, and enable the [`stream_mode`](../params/#stream_mode) parameter. The complete usage example is as follows: +CamGear API provides direct support for piping video streams from various popular streaming services like [Twitch](https://www.twitch.tv/), [Vimeo](https://vimeo.com/), [Dailymotion](https://www.dailymotion.com), and [many more ➶](https://streamlink.github.io/plugin_matrix.html#plugins). All you have to do is to provide the desired Video's URL to its `source` parameter, and enable the [`stream_mode`](../params/#stream_mode) parameter. The complete usage example is as follows: !!! bug "Bug in OpenCV's FFmpeg" To workaround a [**FFmpeg bug**](https://github.com/abhiTronix/vidgear/issues/133#issuecomment-638263225) that causes video to freeze frequently, You must always use [GStreamer backend](../params/#backend) for Livestreams _(such as Twitch URLs)_. @@ -90,10 +90,10 @@ import cv2 options = {"STREAM_RESOLUTION": "720p"} # Add any desire Video URL as input source -# for e.g https://www.dailymotion.com/video/x7xsoud +# for e.g https://vimeo.com/151666798 # and enable Stream Mode (`stream_mode = True`) stream = CamGear( - source="https://www.dailymotion.com/video/x7xsoud", + source="https://vimeo.com/151666798", stream_mode=True, logging=True, **options @@ -186,7 +186,7 @@ stream.stop() ## Using CamGear with Variable Camera Properties -CamGear API also flexibly support various **Source Tweak Parameters** available within [OpenCV's VideoCapture API](https://docs.opencv.org/master/d4/d15/group__videoio__flags__base.html#gaeb8dd9c89c10a5c63c139bf7c4f5704d). These tweak parameters can be used to manipulate input source Camera-Device properties _(such as its brightness, saturation, size, iso, gain etc.)_ seamlessly, and can be easily applied in CamGear API through its `options` dictionary parameter by formatting them as its attributes. 
The complete usage example is as follows: +CamGear API also flexibly supports various **Source Tweak Parameters** available within [OpenCV's VideoCapture API](https://docs.opencv.org/master/d4/d15/group__videoio__flags__base.html#gaeb8dd9c89c10a5c63c139bf7c4f5704d). These tweak parameters can be used to transform input source Camera-Device properties _(such as its brightness, saturation, size, iso, gain etc.)_ seamlessly, and can be easily applied in CamGear API through its `options` dictionary parameter by formatting them as its attributes. The complete usage example is as follows: !!! tip "All the supported Source Tweak Parameters can be found [here ➶](../advanced/source_params/#source-tweak-parameters-for-camgear-api)" @@ -301,4 +301,10 @@ cv2.destroyAllWindows() stream.stop() ``` +   + +## Bonus Examples + +!!! example "Checkout more advanced CamGear examples with unusual configuration [here ➶](../../../help/camgear_ex/)" +   \ No newline at end of file diff --git a/docs/gears/netgear/advanced/bidirectional_mode.md index 6ea1df99a..cfea17b39 100644 --- a/docs/gears/netgear/advanced/bidirectional_mode.md +++ b/docs/gears/netgear/advanced/bidirectional_mode.md @@ -36,7 +36,7 @@ This mode can be easily activated in NetGear through `bidirectional_mode` attrib   -!!! danger "Important" +!!! danger "Important Information regarding Bidirectional Mode" * In Bidirectional Mode, `zmq.PAIR`(ZMQ Pair) & `zmq.REQ/zmq.REP`(ZMQ Request/Reply) are **ONLY** supported messaging patterns. Accessing this mode with any other messaging pattern will result in `ValueError`.
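The messaging-pattern rule above can be sketched as a tiny validator. This is a hypothetical helper (not part of NetGear's actual internals); the `pattern` numbering assumed here — `0` = `zmq.PAIR`, `1` = `zmq.REQ/zmq.REP`, `2` = `zmq.PUB/zmq.SUB` — is taken from the pattern values used in the usage examples on these pages:

```python
# Hypothetical sketch, NOT NetGear's real code: mirrors the documented rule
# that Bidirectional Mode accepts only zmq.PAIR and zmq.REQ/zmq.REP patterns,
# and raises ValueError for anything else.
PATTERNS = {0: "zmq.PAIR", 1: "zmq.REQ/zmq.REP", 2: "zmq.PUB/zmq.SUB"}

def validate_bidirectional_pattern(pattern):
    """Return the pattern name, or raise ValueError if Bidirectional Mode forbids it."""
    if pattern not in (0, 1):
        raise ValueError(
            "Bidirectional Mode supports only zmq.PAIR and zmq.REQ/zmq.REP, "
            "got: {}".format(PATTERNS.get(pattern, pattern))
        )
    return PATTERNS[pattern]
```

So, for example, initializing with `pattern=2` (Publish/Subscribe) while `bidirectional_mode` is enabled is exactly the kind of combination this rule rejects.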
@@ -69,7 +69,7 @@ This mode can be easily activated in NetGear through `bidirectional_mode` attrib   -## Method Parameters +## Exclusive Parameters To send data bidirectionally, NetGear API provides two exclusive parameters for its methods: @@ -364,7 +364,7 @@ server.close() In this example we are going to implement a bare-minimum example, where we will be sending video-frames _(3-Dimensional numpy arrays)_ of the same Video bidirectionally at the same time, for testing the real-time performance and synchronization between the Server and the Client using this(Bidirectional) Mode. -!!! tip "This feature is great for building applications like Real-Time Video Chat." +!!! tip "This example is useful for building applications like Real-Time Video Chat." !!! info "We're also using [`reducer()`](../../../../../bonus/reference/helper/#vidgear.gears.helper.reducer--reducer) method for reducing frame-size on-the-go for additional performance." @@ -378,14 +378,13 @@ Open your favorite terminal and execute the following python code: ```python # import required libraries -from vidgear.gears import VideoGear from vidgear.gears import NetGear from vidgear.gears.helper import reducer import numpy as np import cv2 # open any valid video stream(for e.g `test.mp4` file) -stream = VideoGear(source="test.mp4").start() +stream = cv2.VideoCapture("test.mp4") # activate Bidirectional mode options = {"bidirectional_mode": True} @@ -398,10 +397,10 @@ while True: try: # read frames from stream - frame = stream.read() + (grabbed, frame) = stream.read() - # check for frame if Nonetype - if frame is None: + # check for frame if not grabbed + if not grabbed: break # reducer frames size if you want more performance, otherwise comment this line @@ -428,7 +427,7 @@ while True: break # safely close video stream -stream.stop() +stream.release() # safely close server server.close() @@ -445,7 +444,6 @@ Then open another terminal on the same system and execute the following python c ```python # import 
required libraries from vidgear.gears import NetGear -from vidgear.gears import VideoGear from vidgear.gears.helper import reducer import cv2 @@ -453,7 +451,7 @@ import cv2 options = {"bidirectional_mode": True} # again open the same video stream -stream = VideoGear(source="test.mp4").start() +stream = cv2.VideoCapture("test.mp4") # define NetGear Client with `receive_mode = True` and defined parameter client = NetGear(receive_mode=True, pattern=1, logging=True, **options) @@ -462,10 +460,10 @@ client = NetGear(receive_mode=True, pattern=1, logging=True, **options) while True: # read frames from stream - frame = stream.read() + (grabbed, frame) = stream.read() - # check for frame if Nonetype - if frame is None: + # check for frame if not grabbed + if not grabbed: break # reducer frames size if you want more performance, otherwise comment this line @@ -503,7 +501,7 @@ while True: cv2.destroyAllWindows() # safely close video stream -stream.stop() +stream.release() # safely close client client.close() diff --git a/docs/gears/netgear/advanced/compression.md b/docs/gears/netgear/advanced/compression.md index d09bf2070..6187ddef9 100644 --- a/docs/gears/netgear/advanced/compression.md +++ b/docs/gears/netgear/advanced/compression.md @@ -49,9 +49,9 @@ Frame Compression is enabled by default in NetGear, and can be easily controlled   -## Supported Attributes +## Exclusive Attributes -For implementing Frame Compression, NetGear API currently provide following attribute for its [`options`](../../params/#options) dictionary parameter to leverage performance with Frame Compression: +For implementing Frame Compression, NetGear API currently provide following exclusive attribute for its [`options`](../../params/#options) dictionary parameter to leverage performance with Frame Compression: * `jpeg_compression`: _(bool/str)_ This internal attribute is used to activate/deactivate JPEG Frame Compression as well as to specify incoming frames colorspace with compression. 
Its usage is as follows: @@ -475,14 +475,13 @@ Open your favorite terminal and execute the following python code: ```python # import required libraries -from vidgear.gears import VideoGear from vidgear.gears import NetGear from vidgear.gears.helper import reducer import numpy as np import cv2 # open any valid video stream(for e.g `test.mp4` file) -stream = VideoGear(source="test.mp4").start() +stream = cv2.VideoCapture("test.mp4") # activate Bidirectional mode and Frame Compression options = { @@ -501,10 +500,10 @@ while True: try: # read frames from stream - frame = stream.read() + (grabbed, frame) = stream.read() - # check for frame if Nonetype - if frame is None: + # check for frame if not grabbed + if not grabbed: break # reducer frames size if you want even more performance, otherwise comment this line @@ -531,7 +530,7 @@ while True: break # safely close video stream -stream.stop() +stream.release() # safely close server server.close() @@ -548,7 +547,6 @@ Then open another terminal on the same system and execute the following python c ```python # import required libraries from vidgear.gears import NetGear -from vidgear.gears import VideoGear from vidgear.gears.helper import reducer import cv2 @@ -562,7 +560,7 @@ options = { } # again open the same video stream -stream = VideoGear(source="test.mp4").start() +stream = cv2.VideoCapture("test.mp4") # define NetGear Client with `receive_mode = True` and defined parameter client = NetGear(receive_mode=True, pattern=1, logging=True, **options) @@ -571,10 +569,10 @@ client = NetGear(receive_mode=True, pattern=1, logging=True, **options) while True: # read frames from stream - frame = stream.read() + (grabbed, frame) = stream.read() - # check for frame if Nonetype - if frame is None: + # check for frame if not grabbed + if not grabbed: break # reducer frames size if you want even more performance, otherwise comment this line @@ -612,7 +610,7 @@ while True: cv2.destroyAllWindows() # safely close video stream 
-stream.stop() +stream.release() # safely close client client.close() diff --git a/docs/gears/netgear/advanced/multi_client.md index b8ab39be1..9c7e6016f 100644 --- a/docs/gears/netgear/advanced/multi_client.md +++ b/docs/gears/netgear/advanced/multi_client.md @@ -37,7 +37,7 @@ The supported patterns for this mode are Publish/Subscribe (`zmq.PUB/zmq.SUB`) a   -!!! danger "Multi-Clients Mode Requirements" +!!! danger "Important Information regarding Multi-Clients Mode" * A unique PORT address **MUST** be assigned to each Client on the network using its [`port`](../../params/#port) parameter. @@ -45,6 +45,8 @@ The supported patterns for this mode are Publish/Subscribe (`zmq.PUB/zmq.SUB`) a * Patterns `1` _(i.e. Request/Reply `zmq.REQ/zmq.REP`)_ and `2` _(i.e. Publish/Subscribe `zmq.PUB/zmq.SUB`)_ are the only supported pattern values for this Mode. Therefore, calling any other pattern value with this mode will result in `ValueError`. + * Multi-Clients and Multi-Servers exclusive modes **CANNOT** be enabled simultaneously, otherwise NetGear API will throw `ValueError`. + * The [`address`](../../params/#address) parameter value of each Client **MUST** exactly match the Server.   @@ -71,12 +73,10 @@ The supported patterns for this mode are Publish/Subscribe (`zmq.PUB/zmq.SUB`) a ## Usage Examples -!!! alert "Important Information" +!!! alert "Important" * ==Frame/Data transmission will **NOT START** until all given Client(s) are connected to the Server.== - * Multi-Clients and Multi-Servers exclusive modes **CANNOT** be enabled simultaneously, Otherwise NetGear API will throw `ValueError`. - * For sake of simplicity, in these examples we will use only two unique Clients, but the number of these Clients can be extended to **SEVERAL** numbers depending upon your Network bandwidth and System Capabilities.
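The Multi-Clients Mode requirements above (unique port per Client, patterns `1`/`2` only, no mixing with Multi-Servers Mode) can be sketched as a small configuration check. This helper is purely illustrative and not part of the NetGear API; its signature is an assumption for this sketch, while the rules themselves come from the requirements listed on this page:

```python
# Hypothetical config validator (illustrative only, NOT NetGear code):
# enforces the documented Multi-Clients Mode constraints.
def check_multiclients_config(pattern, client_ports, multiserver_mode=False):
    """Validate a Multi-Clients setup against the documented rules."""
    # Multi-Clients and Multi-Servers modes cannot be enabled together
    if multiserver_mode:
        raise ValueError("Multi-Clients and Multi-Servers modes cannot be enabled simultaneously.")
    # only Request/Reply (1) and Publish/Subscribe (2) patterns are supported
    if pattern not in (1, 2):
        raise ValueError("Only patterns 1 (REQ/REP) and 2 (PUB/SUB) are supported in this mode.")
    # every Client needs its own unique PORT address
    if len(set(client_ports)) != len(client_ports):
        raise ValueError("A unique PORT address must be assigned to each Client.")
    return True
```

For instance, a two-client setup with ports `"5566"` and `"5567"` and `pattern=1` passes, while reusing a port or combining both exclusive modes is rejected, matching the `ValueError` behavior described above.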
@@ -85,7 +85,9 @@ The supported patterns for this mode are Publish/Subscribe (`zmq.PUB/zmq.SUB`) a ### Bare-Minimum Usage -In this example, we will capturing live video-frames from a source _(a.k.a Servers)_ with a webcam connected to it. Afterwards, those captured frame will be transferred over the network to a two independent system _(a.k.a Client)_ at the same time, and will be displayed in Output Window at real-time. All this by using this Multi-Clients Mode in NetGear API. +In this example, we will be capturing live video-frames from a source _(a.k.a Server)_ with a webcam connected to it. Afterwards, those captured frames will be sent over the network to two independent systems _(a.k.a Clients)_ using this Multi-Clients Mode in NetGear API. Finally, both Clients will be displaying received frames in Output Windows in real time. + +!!! tip "This example is useful for building applications like Real-Time Video Broadcasting to multiple clients in a local network." #### Server's End diff --git a/docs/gears/netgear/advanced/multi_server.md index 8d6dc61c1..d0fa4caaa 100644 --- a/docs/gears/netgear/advanced/multi_server.md +++ b/docs/gears/netgear/advanced/multi_server.md @@ -35,7 +35,7 @@ The supported patterns for this mode are Publish/Subscribe (`zmq.PUB/zmq.SUB`) a   -!!! danger "Multi-Servers Mode Requirements" +!!! danger "Important Information regarding Multi-Servers Mode" * A unique PORT address **MUST** be assigned to each Server on the network using its [`port`](../../params/#port) parameter. @@ -43,6 +43,8 @@ The supported patterns for this mode are Publish/Subscribe (`zmq.PUB/zmq.SUB`) a * Patterns `1` _(i.e. Request/Reply `zmq.REQ/zmq.REP`)_ and `2` _(i.e. Publish/Subscribe `zmq.PUB/zmq.SUB`)_ are the only supported values for this Mode. Therefore, calling any other pattern value with this mode will result in `ValueError`. + * Multi-Servers and Multi-Clients exclusive modes **CANNOT** be enabled simultaneously, otherwise NetGear API will throw `ValueError`. + * The [`address`](../../params/#address) parameter value of each Server **MUST** exactly match the Client.   @@ -68,23 +70,24 @@ The supported patterns for this mode are Publish/Subscribe (`zmq.PUB/zmq.SUB`) a ## Usage Examples -!!! alert "Important Information" +!!! alert "Example Assumptions" * For sake of simplicity, in these examples we will use only two unique Servers, but the number of these Servers can be extended to several numbers depending upon your system hardware limits. - * All of Servers will be transferring frames to a single Client system at the same time, which will be displaying received frames as a montage _(multiple frames concatenated together)_. + * All of the Servers will be transferring frames to a single Client system at the same time, which will be displaying received frames as a live montage _(multiple frames concatenated together)_. * For building Frames Montage at Client's end, we are going to use the `imutils` python library function to build montages, by concatenating together frames received from different servers. Therefore, kindly install this library with the `pip install imutils` terminal command. - * Multi-Servers and Multi-Clients exclusive modes **CANNOT** be enabled simultaneously, Otherwise NetGear API will throw `ValueError`. -   ### Bare-Minimum Usage -In this example, we will capturing live video-frames on two independent sources _(a.k.a Servers)_, each with a webcam connected to it. Afterwards, these frames will be sent over the network to a single system _(a.k.a Client)_ using this Multi-Servers Mode in NetGear API in real time, and will be displayed as a live montage. + + +!!! tip "This example is useful for building applications like a Real-Time Security System with multiple cameras." #### Client's End diff --git a/docs/gears/netgear/advanced/secure_mode.md index 1fee3c798..200ebd92a 100644 --- a/docs/gears/netgear/advanced/secure_mode.md +++ b/docs/gears/netgear/advanced/secure_mode.md @@ -48,7 +48,7 @@ Secure mode supports the two most powerful ZMQ security layers:   -!!! danger "Secure Mode Requirements" +!!! danger "Important Information regarding Secure Mode" * The `secure_mode` attribute value at the Client's end **MUST** match exactly the Server's end _(i.e. **IronHouse** security layer is only compatible with **IronHouse**, and **NOT** with **StoneHouse**)_. @@ -83,9 +83,9 @@ Secure mode supports the two most powerful ZMQ security layers:   -## Supported Attributes +## Exclusive Attributes -For implementing Secure Mode, NetGear API currently provide following attribute for its [`options`](../../params/#options) dictionary parameter: +For implementing Secure Mode, NetGear API currently provides the following exclusive attribute for its [`options`](../../params/#options) dictionary parameter: * `secure_mode` (_integer_) : This attribute activates and sets the ZMQ security Mechanism. Its possible values are: `1`(_StoneHouse_) & `2`(_IronHouse_), and its default value is `0`(_Grassland(no security)_).
Its usage is as follows: diff --git a/docs/gears/netgear/advanced/ssh_tunnel.md b/docs/gears/netgear/advanced/ssh_tunnel.md index 150d79fa8..8b0d41a5c 100644 --- a/docs/gears/netgear/advanced/ssh_tunnel.md +++ b/docs/gears/netgear/advanced/ssh_tunnel.md @@ -80,12 +80,12 @@ SSH Tunnel Mode requires [`pexpect`](http://www.noah.org/wiki/pexpect) or [`para   -## Supported Attributes +## Exclusive Attributes !!! warning "All these attributes will work on Server end only whereas Client end will simply discard them." -For implementing SSH Tunneling Mode, NetGear API currently provide following attribute for its [`options`](../../params/#options) dictionary parameter: +For implementing SSH Tunneling Mode, NetGear API currently provide following exclusive attribute for its [`options`](../../params/#options) dictionary parameter: * **`ssh_tunnel_mode`** (_string_) : This attribute activates SSH Tunneling Mode and sets the fully specified `"@:"` SSH URL for tunneling at Server end. Its usage is as follows: @@ -138,7 +138,7 @@ For implementing SSH Tunneling Mode, NetGear API currently provide following att ## Usage Example -??? alert "Assumptions for this Example" +???+ alert "Assumptions for this Example" In this particular example, we assume that: diff --git a/docs/gears/netgear/usage.md b/docs/gears/netgear/usage.md index 95d310bd0..e2aaa2d14 100644 --- a/docs/gears/netgear/usage.md +++ b/docs/gears/netgear/usage.md @@ -471,4 +471,10 @@ stream.stop() server.close() ``` -  \ No newline at end of file +  + +## Bonus Examples + +!!! 
example "Checkout more advanced NetGear examples with unusual configuration [here ➶](../../../help/netgear_ex/)" + +  \ No newline at end of file diff --git a/docs/gears/netgear_async/advanced/bidirectional_mode.md b/docs/gears/netgear_async/advanced/bidirectional_mode.md index 923372156..0341f372b 100644 --- a/docs/gears/netgear_async/advanced/bidirectional_mode.md +++ b/docs/gears/netgear_async/advanced/bidirectional_mode.md @@ -219,150 +219,6 @@ if __name__ == "__main__":   -### Bare-Minimum Usage with VideoGear - -Following is another comparatively faster Bidirectional Mode bare-minimum example over Custom Source Server built using multi-threaded [VideoGear](../../../videogear/overview/) _(instead of OpenCV)_ and NetGear_Async API: - -#### Server End - -Open your favorite terminal and execute the following python code: - -!!! tip "You can terminate both sides anytime by pressing ++ctrl+"C"++ on your keyboard!" - -```python -# import library -from vidgear.gears.asyncio import NetGear_Async -from vidgear.gears import VideoGear -import cv2, asyncio - -# activate Bidirectional mode -options = {"bidirectional_mode": True} - -# initialize Server without any source -server = NetGear_Async(source=None, logging=True, **options) - -# Create a async frame generator as custom source -async def my_frame_generator(): - - # !!! define your own video source here !!! - # Open any valid video stream(for e.g `foo.mp4` file) - stream = VideoGear(source="foo.mp4").start() - - # loop over stream until its terminated - while True: - # read frames - frame = stream.read() - - # check for frame if Nonetype - if frame is None: - break - - # {do something with the frame to be sent here} - - # prepare data to be sent(a simple text in our case) - target_data = "Hello, I am a Server." 
- - # receive data from Client - recv_data = await server.transceive_data() - - # print data just received from Client - if not (recv_data is None): - print(recv_data) - - # send our frame & data - yield (target_data, frame) - - # sleep for sometime - await asyncio.sleep(0) - - # safely close video stream - stream.stop() - - -if __name__ == "__main__": - # set event loop - asyncio.set_event_loop(server.loop) - # Add your custom source generator to Server configuration - server.config["generator"] = my_frame_generator() - # Launch the Server - server.launch() - try: - # run your main function task until it is complete - server.loop.run_until_complete(server.task) - except (KeyboardInterrupt, SystemExit): - # wait for interrupts - pass - finally: - # finally close the server - server.close() -``` - -#### Client End - -Then open another terminal on the same system and execute the following python code and see the output: - -!!! tip "You can terminate client anytime by pressing ++ctrl+"C"++ on your keyboard!" - -```python -# import libraries -from vidgear.gears.asyncio import NetGear_Async -import cv2, asyncio - -# activate Bidirectional mode -options = {"bidirectional_mode": True} - -# define and launch Client with `receive_mode=True` -client = NetGear_Async(receive_mode=True, logging=True, **options).launch() - -# Create a async function where you want to show/manipulate your received frames -async def main(): - # loop over Client's Asynchronous Frame Generator - async for (data, frame) in client.recv_generator(): - - # do something with receive data from server - if not (data is None): - # let's print it - print(data) - - # {do something with received frames here} - - # Show output window(comment these lines if not required) - cv2.imshow("Output Frame", frame) - cv2.waitKey(1) & 0xFF - - # prepare data to be sent - target_data = "Hi, I am a Client here." 
- - # send our data to server - await client.transceive_data(data=target_data) - - # await before continuing - await asyncio.sleep(0) - - -if __name__ == "__main__": - # Set event loop to client's - asyncio.set_event_loop(client.loop) - try: - # run your main function task until it is complete - client.loop.run_until_complete(main()) - except (KeyboardInterrupt, SystemExit): - # wait for interrupts - pass - - # close all output window - cv2.destroyAllWindows() - - # safely close client - client.close() -``` - -  - -  - - - ### Using Bidirectional Mode with Variable Parameters diff --git a/docs/gears/netgear_async/overview.md b/docs/gears/netgear_async/overview.md index 162ff784a..fc2e505cf 100644 --- a/docs/gears/netgear_async/overview.md +++ b/docs/gears/netgear_async/overview.md @@ -30,7 +30,7 @@ limitations under the License. NetGear_Async is built on [`zmq.asyncio`](https://pyzmq.readthedocs.io/en/latest/api/zmq.asyncio.html), and powered by a high-performance asyncio event loop called [**`uvloop`**](https://github.com/MagicStack/uvloop) to achieve unmatchable high-speed and lag-free video streaming over the network with minimal resource constraints. NetGear_Async can transfer thousands of frames in just a few seconds without causing any significant load on your system. -NetGear_Async provides complete server-client handling and options to use variable protocols/patterns similar to [NetGear API](../../netgear/overview/). Furthermore, NetGear_Async allows us to define our custom Server as source to manipulate frames easily before sending them across the network(see this [doc](../usage/#using-netgear_async-with-a-custom-sourceopencv) example). +NetGear_Async provides complete server-client handling and options to use variable protocols/patterns similar to [NetGear API](../../netgear/overview/). 
Furthermore, NetGear_Async allows us to define our custom Server as source to transform frames easily before sending them across the network (see this [doc](../usage/#using-netgear_async-with-a-custom-sourceopencv) example). NetGear_Async now supports additional [**bidirectional data transmission**](../advanced/bidirectional_mode) between receiver(client) and sender(server) while transferring frames. Users can easily build complex applications such as [Real-Time Video Chat](../advanced/bidirectional_mode/#using-bidirectional-mode-for-video-frames-transfer) in just a few lines of code. diff --git a/docs/gears/netgear_async/usage.md index f0a123657..63323b049 100644 --- a/docs/gears/netgear_async/usage.md +++ b/docs/gears/netgear_async/usage.md @@ -223,7 +223,7 @@ if __name__ == "__main__": ## Using NetGear_Async with a Custom Source(OpenCV) -NetGear_Async allows you to easily define your own custom Source at Server-end that you want to use to manipulate your frames before sending them onto the network. +NetGear_Async allows you to easily define your own custom Source at Server-end that you want to use to transform your frames before sending them onto the network. Let's implement a bare-minimum example with a Custom Source using NetGear_Async API and OpenCV: @@ -241,14 +241,14 @@ import cv2, asyncio # initialize Server without any source server = NetGear_Async(source=None, logging=True) +# !!! define your own video source here !!! +# Open any video stream such as live webcam +# video stream on first index(i.e. 0) device +stream = cv2.VideoCapture(0) + # Create an async frame generator as custom source async def my_frame_generator(): - # !!! define your own video source here !!! - # Open any video stream such as live webcam - # video stream on first index(i.e.
0) device - stream = cv2.VideoCapture(0) - # loop over stream until its terminated while True: @@ -265,9 +265,6 @@ async def my_frame_generator(): yield frame # sleep for sometime await asyncio.sleep(0) - - # close stream - stream.release() if __name__ == "__main__": @@ -284,6 +281,8 @@ if __name__ == "__main__": # wait for interrupts pass finally: + # close stream + stream.release() # finally close the server server.close() ``` @@ -375,6 +374,7 @@ if __name__ == "__main__": ``` ### Client's End + Then open another terminal on the same system and execute the following python code and see the output: !!! warning "Client will throw TimeoutError if it fails to connect to the Server in given [`timeout`](../params/#timeout) value!" @@ -429,4 +429,10 @@ if __name__ == "__main__": writer.close() ``` -  \ No newline at end of file +  + +## Bonus Examples + +!!! example "Checkout more advanced NetGear_Async examples with unusual configuration [here ➶](../../../help/netgear_async_ex/)" + +  \ No newline at end of file diff --git a/docs/gears/pigear/usage.md b/docs/gears/pigear/usage.md index 7b9827685..78ec04348 100644 --- a/docs/gears/pigear/usage.md +++ b/docs/gears/pigear/usage.md @@ -270,4 +270,10 @@ stream.stop() writer.close() ``` -  \ No newline at end of file +  + +## Bonus Examples + +!!! 
example "Checkout more advanced PiGear examples with unusual configuration [here ➶](../../../help/pigear_ex/)" + +  \ No newline at end of file diff --git a/docs/gears/screengear/usage.md b/docs/gears/screengear/usage.md index c8ccfa8ec..9dd7c6ce3 100644 --- a/docs/gears/screengear/usage.md +++ b/docs/gears/screengear/usage.md @@ -122,7 +122,7 @@ from vidgear.gears import ScreenGear import cv2 # open video stream with defined parameters with monitor at index `1` selected -stream = ScreenGear(monitor=1, logging=True, **options).start() +stream = ScreenGear(monitor=1, logging=True).start() # loop over while True: @@ -167,7 +167,7 @@ from vidgear.gears import ScreenGear import cv2 # open video stream with defined parameters and `mss` backend for extracting frames. -stream = ScreenGear(backend="mss", logging=True, **options).start() +stream = ScreenGear(backend="mss", logging=True).start() # loop over while True: @@ -321,4 +321,10 @@ stream.stop() writer.close() ``` -  \ No newline at end of file +  + +## Bonus Examples + +!!! 
example "Checkout more advanced ScreenGear examples with unusual configuration [here ➶](../../../help/screengear_ex/)" + +&nbsp; \ No newline at end of file diff --git a/docs/gears/stabilizer/usage.md b/docs/gears/stabilizer/usage.md index 65167fe37..acd7ca2ae 100644 --- a/docs/gears/stabilizer/usage.md +++ b/docs/gears/stabilizer/usage.md @@ -67,7 +67,7 @@ while True: if stabilized_frame is None: continue - # {do something with the stabilized_frame frame here} + # {do something with the stabilized frame here} # Show output window cv2.imshow("Output Stabilized Frame", stabilized_frame) @@ -121,7 +121,7 @@ while True: if stabilized_frame is None: continue - # {do something with the frame here} + # {do something with the stabilized frame here} # Show output window cv2.imshow("Stabilized Frame", stabilized_frame) @@ -145,7 +145,7 @@ stream.release() ## Using Stabilizer with Variable Parameters -Stabilizer class provide certain [parameters](../params/) which you can use to manipulate its internal properties. The complete usage example is as follows: +Stabilizer class provides certain [parameters](../params/) which you can use to tweak its internal properties. The complete usage example is as follows: ```python # import required libraries @@ -176,7 +176,7 @@ while True: if stabilized_frame is None: continue - # {do something with the stabilized_frame frame here} + # {do something with the stabilized frame here} # Show output window cv2.imshow("Output Stabilized Frame", stabilized_frame) @@ -203,6 +203,8 @@ stream.stop() VideoGear's stabilizer can be used in conjunction with WriteGear API directly without any compatibility issues. The complete usage example is as follows: +!!! tip "You can also add live audio input to WriteGear pipeline. 
See this [bonus example](../../../help)" + ```python # import required libraries from vidgear.gears.stabilizer import Stabilizer @@ -236,7 +238,7 @@ while True: if stabilized_frame is None: continue - # {do something with the frame here} + # {do something with the stabilized frame here} # write stabilized frame to writer writer.write(stabilized_frame) @@ -271,4 +273,10 @@ writer.close() !!! example "The complete usage example can be found [here ➶](../../videogear/usage/#using-videogear-with-video-stabilizer-backend)" +  + +## Bonus Examples + +!!! example "Checkout more advanced Stabilizer examples with unusual configuration [here ➶](../../../help/stabilizer_ex/)" +   \ No newline at end of file diff --git a/docs/gears/streamgear/introduction.md b/docs/gears/streamgear/introduction.md index 205e73f0c..b460a41ee 100644 --- a/docs/gears/streamgear/introduction.md +++ b/docs/gears/streamgear/introduction.md @@ -39,7 +39,7 @@ SteamGear currently supports [**MPEG-DASH**](https://www.encoding.com/mpeg-dash/ SteamGear also creates a Manifest file _(such as MPD in-case of DASH)_ or a Master Playlist _(such as M3U8 in-case of Apple HLS)_ besides segments that describe these segment information _(timing, URL, media characteristics like video resolution and adaptive bit rates)_ and is provided to the client before the streaming session. -!!! tip "For streaming with older traditional protocols such as RTMP, RTSP/RTP you could use [WriteGear](../../writegear/introduction/) API instead." +!!! alert "For streaming with older traditional protocols such as RTMP, RTSP/RTP you could use [WriteGear](../../writegear/introduction/) API instead."   @@ -52,10 +52,17 @@ SteamGear also creates a Manifest file _(such as MPD in-case of DASH)_ or a Mast * StreamGear **MUST** requires FFmpeg executables for its core operations. Follow these dedicated [Platform specific Installation Instructions ➶](../ffmpeg_install/) for its installation. 
- * :warning: StreamGear API will throw **RuntimeError**, if it fails to detect valid FFmpeg executables on your system. + * :warning: StreamGear API will throw **RuntimeError**, if it fails to detect valid FFmpeg executable on your system. * It is advised to enable logging _([`logging=True`](../params/#logging))_ on the first run for easily identifying any runtime errors. +!!! tip "Useful Links" + + - Checkout [this detailed blogpost](https://ottverse.com/mpeg-dash-video-streaming-the-complete-guide/) on how MPEG-DASH works. + - Checkout [this detailed blogpost](https://ottverse.com/hls-http-live-streaming-how-does-it-work/) on how HLS works. + - Checkout [this detailed blogpost](https://ottverse.com/hls-http-live-streaming-how-does-it-work/) for HLS vs. MPEG-DASH comparison. + +   ## Mode of Operations @@ -68,9 +75,9 @@ StreamGear primarily operates in following independent modes for transcoding: Rather, you can enable live-streaming in Real-time Frames Mode by using using exclusive [`-livestream`](../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter in StreamGear API. Checkout [this usage example](../rtfm/usage/#bare-minimum-usage-with-live-streaming) for more information. -- [**Single-Source Mode**](../ssm/overview): In this mode, StreamGear transcodes entire video/audio file _(as opposed to frames by frame)_ into a sequence of multiple smaller chunks/segments for streaming. This mode works exceptionally well, when you're transcoding lossless long-duration videos(with audio) for streaming and required no extra efforts or interruptions. But on the downside, the provided source cannot be changed or manipulated before sending onto FFmpeg Pipeline for processing. +- [**Single-Source Mode**](../ssm/overview): In this mode, StreamGear **transcodes entire video file** _(as opposed to frame-by-frame)_ into a sequence of multiple smaller chunks/segments for streaming. 
This mode works exceptionally well when you're transcoding long-duration lossless videos (with audio) for streaming that requires no interruptions. But on the downside, the provided source cannot be flexibly manipulated or transformed before sending onto FFmpeg Pipeline for processing. -- [**Real-time Frames Mode**](../rtfm/overview): In this mode, StreamGear directly transcodes video-frames _(as opposed to a entire file)_, into a sequence of multiple smaller chunks/segments for streaming. In this mode, StreamGear supports real-time [`numpy.ndarray`](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) frames, and process them over FFmpeg pipeline. But on the downside, audio has to added manually _(as separate source)_ for streams. +- [**Real-time Frames Mode**](../rtfm/overview): In this mode, StreamGear directly **transcodes frame-by-frame** _(as opposed to an entire video file)_, into a sequence of multiple smaller chunks/segments for streaming. This mode works exceptionally well when you desire to flexibly manipulate or transform [`numpy.ndarray`](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) frames in real-time before sending them onto FFmpeg Pipeline for processing. But on the downside, audio has to be added manually _(as separate source)_ for streams. &nbsp; @@ -125,15 +132,6 @@ from vidgear.gears import StreamGear ## Recommended Players -!!! tip "Useful Links" - - - Checkout [this detailed blogpost](https://ottverse.com/mpeg-dash-video-streaming-the-complete-guide/) on how MPEG-DASH works. - - - Checkout [this detailed blogpost](https://ottverse.com/hls-http-live-streaming-how-does-it-work/) on how HLS works. - - - Checkout [this detailed blogpost](https://ottverse.com/hls-http-live-streaming-how-does-it-work/) for HLS vs. MPEG-DASH comparsion. - - === "GUI Players" - [x] **[MPV Player](https://mpv.io/):** _(recommended)_ MPV is a free, open source, and cross-platform media player. 
It supports a wide variety of media file formats, audio and video codecs, and subtitle types. - [x] **[VLC Player](https://www.videolan.org/vlc/releases/3.0.0.html):** VLC is a free and open source cross-platform multimedia player and framework that plays most multimedia files as well as DVDs, Audio CDs, VCDs, and various streaming protocols. @@ -172,4 +170,10 @@ from vidgear.gears import StreamGear See here 🚀 -&nbsp; \ No newline at end of file +&nbsp; + +## Bonus Examples + +!!! example "Checkout more advanced StreamGear examples with unusual configuration [here ➶](../../../help/streamgear_ex/)" + +&nbsp; \ No newline at end of file diff --git a/docs/gears/streamgear/rtfm/overview.md b/docs/gears/streamgear/rtfm/overview.md index d1c07ab82..0f8649233 100644 --- a/docs/gears/streamgear/rtfm/overview.md +++ b/docs/gears/streamgear/rtfm/overview.md @@ -29,13 +29,13 @@ limitations under the License. ## Overview -When no valid input is received on [`-video_source`](../../params/#a-exclusive-parameters) attribute of [`stream_params`](../../params/#supported-parameters) dictionary parameter, StreamGear API activates this mode where it directly transcodes real-time [`numpy.ndarray`](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) video-frames _(as opposed to a entire file)_ into a sequence of multiple smaller chunks/segments for streaming. +When no valid input is received on [`-video_source`](../../params/#a-exclusive-parameters) attribute of [`stream_params`](../../params/#supported-parameters) dictionary parameter, StreamGear API activates this mode where it directly transcodes real-time [`numpy.ndarray`](https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html#numpy-ndarray) video-frames _(as opposed to an entire video file)_ into a sequence of multiple smaller chunks/segments for adaptive streaming. 
-SteamGear supports both [**MPEG-DASH**](https://www.encoding.com/mpeg-dash/) _(Dynamic Adaptive Streaming over HTTP, ISO/IEC 23009-1)_ and [**Apple HLS**](https://developer.apple.com/documentation/http_live_streaming) _(HTTP Live Streaming)_ with this mode. +This mode works exceptionally well when you desire to flexibly manipulate or transform video-frames in real-time before sending them onto FFmpeg Pipeline for processing. But on the downside, StreamGear **DOES NOT** automatically map video-source's audio to generated streams with this mode. You need to manually assign separate audio-source through [`-audio`](../../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter. -In this mode, StreamGear **DOES NOT** automatically maps video-source audio to generated streams. You need to manually assign separate audio-source through [`-audio`](../../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter. +StreamGear supports both [**MPEG-DASH**](https://www.encoding.com/mpeg-dash/) _(Dynamic Adaptive Streaming over HTTP, ISO/IEC 23009-1)_ and [**Apple HLS**](https://developer.apple.com/documentation/http_live_streaming) _(HTTP Live Streaming)_ with this mode. -This mode provide [`stream()`](../../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.stream) function for directly trancoding video-frames into streamable chunks over the FFmpeg pipeline. +For this mode, StreamGear API provides exclusive [`stream()`](../../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.stream) method for directly transcoding video-frames into streamable chunks. &nbsp; diff --git a/docs/gears/streamgear/rtfm/usage.md b/docs/gears/streamgear/rtfm/usage.md index 8b0d34a3f..6008a65a7 100644 --- a/docs/gears/streamgear/rtfm/usage.md +++ b/docs/gears/streamgear/rtfm/usage.md @@ -155,7 +155,7 @@ You can easily activate ==Low-latency Livestreaming in Real-time Frames Mode==, !!! 
tip "Use `-window_size` & `-extra_window_size` FFmpeg parameters for controlling number of frames to be kept in Chunks. Less these value, less will be latency." -!!! warning "All Chunks will be overwritten in this mode after every few Chunks _(equal to the sum of `-window_size` & `-extra_window_size` values)_, Hence Newer Chunks and Manifest contains NO information of any older video-frames." +!!! alert "After every few chunks _(equal to the sum of `-window_size` & `-extra_window_size` values)_, all chunks will be overwritten in Live-Streaming. Since newer chunks in manifest/playlist will contain NO information of any older ones, the resultant DASH/HLS stream will play only the most recent frames." !!! note "In this mode, StreamGear **DOES NOT** automatically maps video-source audio to generated streams. You need to manually assign separate audio-source through [`-audio`](../../params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter." diff --git a/docs/gears/streamgear/ssm/overview.md b/docs/gears/streamgear/ssm/overview.md index 99d985a04..b71ba4084 100644 --- a/docs/gears/streamgear/ssm/overview.md +++ b/docs/gears/streamgear/ssm/overview.md @@ -21,18 +21,20 @@ limitations under the License. # StreamGear API: Single-Source Mode
- StreamGear Flow Diagram -
StreamGear API's generalized workflow
+ Single-Source Mode Flow Diagram +
Single-Source Mode generalized workflow
## Overview -In this mode, StreamGear transcodes entire video/audio file _(as opposed to frames by frame)_ into a sequence of multiple smaller chunks/segments for streaming. This mode works exceptionally well, when you're transcoding lossless long-duration videos(with audio) for streaming and required no extra efforts or interruptions. But on the downside, the provided source cannot be changed or manipulated before sending onto FFmpeg Pipeline for processing. +In this mode, StreamGear transcodes an entire audio-video file _(as opposed to frame-by-frame)_ into a sequence of multiple smaller chunks/segments for adaptive streaming. + +This mode works exceptionally well when you're transcoding long-duration lossless video files (with audio) for streaming that requires no interruptions. But on the downside, the provided source cannot be flexibly manipulated or transformed before sending onto FFmpeg Pipeline for processing. SteamGear supports both [**MPEG-DASH**](https://www.encoding.com/mpeg-dash/) _(Dynamic Adaptive Streaming over HTTP, ISO/IEC 23009-1)_ and [**Apple HLS**](https://developer.apple.com/documentation/http_live_streaming) _(HTTP Live Streaming)_ with this mode. -This mode provide [`transcode_source()`](../../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.transcode_source) function to process audio-video files into streamable chunks. +For this mode, StreamGear API provides exclusive [`transcode_source()`](../../../../bonus/reference/streamgear/#vidgear.gears.streamgear.StreamGear.transcode_source) method to easily process audio-video files into streamable chunks. This mode can be easily activated by assigning suitable video path as input to [`-video_source`](../../params/#a-exclusive-parameters) attribute of [`stream_params`](../../params/#stream_params) dictionary parameter, during StreamGear initialization. 
diff --git a/docs/gears/streamgear/ssm/usage.md b/docs/gears/streamgear/ssm/usage.md index 4756f41c4..1db663992 100644 --- a/docs/gears/streamgear/ssm/usage.md +++ b/docs/gears/streamgear/ssm/usage.md @@ -82,7 +82,7 @@ You can easily activate ==Low-latency Livestreaming in Single-Source Mode==, whe !!! tip "Use `-window_size` & `-extra_window_size` FFmpeg parameters for controlling number of frames to be kept in Chunks. Less these value, less will be latency." -!!! warning "All Chunks will be overwritten in this mode after every few Chunks _(equal to the sum of `-window_size` & `-extra_window_size` values)_, Hence Newer Chunks and Manifest contains NO information of any older video-frames." +!!! alert "After every few chunks _(equal to the sum of `-window_size` & `-extra_window_size` values)_, all chunks will be overwritten in Live-Streaming. Since newer chunks in manifest/playlist will contain NO information of any older ones, the resultant DASH/HLS stream will play only the most recent frames." !!! note "If input video-source _(i.e. `-video_source`)_ contains any audio stream/channel, then it automatically gets mapped to all generated streams without any extra efforts." diff --git a/docs/gears/videogear/usage.md b/docs/gears/videogear/usage.md index b02541e34..3f41d1ab8 100644 --- a/docs/gears/videogear/usage.md +++ b/docs/gears/videogear/usage.md @@ -274,4 +274,10 @@ cv2.destroyAllWindows() stream.stop() ``` +&nbsp; + +## Bonus Examples + +!!! 
example "Checkout more advanced VideoGear examples with unusual configuration [here ➶](../../../help/videogear_ex/)" +   \ No newline at end of file diff --git a/docs/gears/webgear/advanced.md b/docs/gears/webgear/advanced.md index a29751622..c0a735366 100644 --- a/docs/gears/webgear/advanced.md +++ b/docs/gears/webgear/advanced.md @@ -42,7 +42,7 @@ Let's implement a bare-minimum example using WebGear, where we will be sending [ ```python # import required libraries import uvicorn -from vidgear.gears.asyncio import WebGear_RTC +from vidgear.gears.asyncio import WebGear # various performance tweaks and enable grayscale input options = { @@ -53,8 +53,8 @@ options = { "jpeg_compression_fastupsample": True, } -# initialize WebGear_RTC app and change its colorspace to grayscale -web = WebGear_RTC( +# initialize WebGear app and change its colorspace to grayscale +web = WebGear( source="foo.mp4", colorspace="COLOR_BGR2GRAY", logging=True, **options ) @@ -74,7 +74,7 @@ web.shutdown() !!! new "New in v0.2.1" This example was added in `v0.2.1`. -WebGear allows you to easily define your own custom Source that you want to use to manipulate your frames before sending them onto the browser. +WebGear allows you to easily define your own custom Source that you want to use to transform your frames before sending them onto the browser. !!! warning "JPEG Frame-Compression and all of its [performance enhancing attributes](../usage/#performance-enhancements) are disabled with a Custom Source!" 
@@ -108,7 +108,7 @@ async def my_frame_producer(): # do something with your OpenCV frame here # reducer frames size if you want more performance otherwise comment this line - frame = await reducer(frame, percentage=30, interpolation=cv2.INTER_LINEAR) # reduce frame by 30% + frame = await reducer(frame, percentage=30, interpolation=cv2.INTER_AREA) # reduce frame by 30% # handle JPEG encoding encodedImage = cv2.imencode(".jpg", frame)[1].tobytes() # yield frame in byte format @@ -248,7 +248,7 @@ WebGear natively supports ASGI middleware classes with Starlette for implementin !!! new "New in v0.2.2" This example was added in `v0.2.2`. -!!! info "All supported middlewares can be [here ➶](https://www.starlette.io/middleware/)" +!!! info "All supported middlewares can be found [here ➶](https://www.starlette.io/middleware/)" For this example, let's use [`CORSMiddleware`](https://www.starlette.io/middleware/#corsmiddleware) for implementing appropriate [CORS headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) to outgoing responses in our application in order to allow cross-origin requests from browsers, as follows: @@ -314,75 +314,8 @@ WebGear gives us complete freedom of altering data files generated in [**Auto-Ge   -## Bonus Usage Examples - -Because of WebGear API's flexible internal wapper around [VideoGear](../../videogear/overview/), it can easily access any parameter of [CamGear](#camgear) and [PiGear](#pigear) videocapture APIs. - -!!! info "Following usage examples are just an idea of what can be done with WebGear API, you can try various [VideoGear](../../videogear/params/), [CamGear](../../camgear/params/) and [PiGear](../../pigear/params/) parameters directly in WebGear API in the similar manner." 
- -### Using WebGear with Pi Camera Module - -Here's a bare-minimum example of using WebGear API with the Raspberry Pi camera module while tweaking its various properties in just one-liner: - -```python -# import libs -import uvicorn -from vidgear.gears.asyncio import WebGear - -# various webgear performance and Raspberry Pi camera tweaks -options = { - "frame_size_reduction": 40, - "jpeg_compression_quality": 80, - "jpeg_compression_fastdct": True, - "jpeg_compression_fastupsample": False, - "hflip": True, - "exposure_mode": "auto", - "iso": 800, - "exposure_compensation": 15, - "awb_mode": "horizon", - "sensor_mode": 0, -} - -# initialize WebGear app -web = WebGear( - enablePiCamera=True, resolution=(640, 480), framerate=60, logging=True, **options -) - -# run this app on Uvicorn server at address http://localhost:8000/ -uvicorn.run(web(), host="localhost", port=8000) - -# close app safely -web.shutdown() -``` - -  - -### Using WebGear with real-time Video Stabilization enabled - -Here's an example of using WebGear API with real-time Video Stabilization enabled: - -```python -# import libs -import uvicorn -from vidgear.gears.asyncio import WebGear - -# various webgear performance tweaks -options = { - "frame_size_reduction": 40, - "jpeg_compression_quality": 80, - "jpeg_compression_fastdct": True, - "jpeg_compression_fastupsample": False, -} - -# initialize WebGear app with a raw source and enable video stabilization(`stabilize=True`) -web = WebGear(source="foo.mp4", stabilize=True, logging=True, **options) +## Bonus Examples -# run this app on Uvicorn server at address http://localhost:8000/ -uvicorn.run(web(), host="localhost", port=8000) - -# close app safely -web.shutdown() -``` +!!! 
example "Checkout more advanced WebGear examples with unusual configuration [here ➶](../../../help/webgear_ex/)" &nbsp; - \ No newline at end of file diff --git a/docs/gears/webgear_rtc/advanced.md b/docs/gears/webgear_rtc/advanced.md index 99e094c61..2726cdc64 100644 --- a/docs/gears/webgear_rtc/advanced.md +++ b/docs/gears/webgear_rtc/advanced.md @@ -64,7 +64,7 @@ web.shutdown() ## Using WebGear_RTC with a Custom Source(OpenCV) -WebGear_RTC allows you to easily define your own Custom Media Server with a custom source that you want to use to manipulate your frames before sending them onto the browser. +WebGear_RTC allows you to easily define your own Custom Media Server with a custom source that you want to use to transform your frames before sending them onto the browser. Let's implement a bare-minimum example with a Custom Source using WebGear_RTC API and OpenCV: @@ -77,6 +77,7 @@ Let's implement a bare-minimum example with a Custom Source using WebGear_RTC AP import uvicorn, asyncio, cv2 from av import VideoFrame from aiortc import VideoStreamTrack +from aiortc.mediastreams import MediaStreamError from vidgear.gears.asyncio import WebGear_RTC from vidgear.gears.asyncio.helper import reducer @@ -112,7 +113,7 @@ class Custom_RTCServer(VideoStreamTrack): # if NoneType if not grabbed: - return None + raise MediaStreamError # reducer frames size if you want more performance otherwise comment this line frame = await reducer(frame, percentage=30) # reduce frame by 30% @@ -145,7 +146,6 @@ uvicorn.run(web(), host="localhost", port=8000) # close app safely web.shutdown() - ``` **And that's all, Now you can see output at [`http://localhost:8000/`](http://localhost:8000/) address.** @@ -262,7 +262,7 @@ WebGear_RTC also natively supports ASGI middleware classes with Starlette for im !!! new "New in v0.2.2" This example was added in `v0.2.2`. -!!! info "All supported middlewares can be [here ➶](https://www.starlette.io/middleware/)" +!!! 
info "All supported middlewares can be found [here ➶](https://www.starlette.io/middleware/)" For this example, let's use [`CORSMiddleware`](https://www.starlette.io/middleware/#corsmiddleware) for implementing appropriate [CORS headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) to outgoing responses in our application in order to allow cross-origin requests from browsers, as follows: @@ -326,69 +326,8 @@ WebGear_RTC gives us complete freedom of altering data files generated in [**Aut   -## Bonus Usage Examples - -Because of WebGear_RTC API's flexible internal wapper around [VideoGear](../../videogear/overview/), it can easily access any parameter of [CamGear](#camgear) and [PiGear](#pigear) videocapture APIs. - -!!! info "Following usage examples are just an idea of what can be done with WebGear_RTC API, you can try various [VideoGear](../../videogear/params/), [CamGear](../../camgear/params/) and [PiGear](../../pigear/params/) parameters directly in WebGear_RTC API in the similar manner." 
- -### Using WebGear_RTC with Pi Camera Module - -Here's a bare-minimum example of using WebGear_RTC API with the Raspberry Pi camera module while tweaking its various properties in just one-liner: - -```python -# import libs -import uvicorn -from vidgear.gears.asyncio import WebGear_RTC - -# various webgear_rtc performance and Raspberry Pi camera tweaks -options = { - "frame_size_reduction": 25, - "hflip": True, - "exposure_mode": "auto", - "iso": 800, - "exposure_compensation": 15, - "awb_mode": "horizon", - "sensor_mode": 0, -} - -# initialize WebGear_RTC app -web = WebGear_RTC( - enablePiCamera=True, resolution=(640, 480), framerate=60, logging=True, **options -) - -# run this app on Uvicorn server at address http://localhost:8000/ -uvicorn.run(web(), host="localhost", port=8000) - -# close app safely -web.shutdown() -``` - -  - -### Using WebGear_RTC with real-time Video Stabilization enabled - -Here's an example of using WebGear_RTC API with real-time Video Stabilization enabled: +## Bonus Examples -```python -# import libs -import uvicorn -from vidgear.gears.asyncio import WebGear_RTC +!!! example "Checkout more advanced WebGear_RTC examples with unusual configuration [here ➶](../../../help/webgear_rtc_ex/)" -# various webgear_rtc performance tweaks -options = { - "frame_size_reduction": 25, -} - -# initialize WebGear_RTC app with a raw source and enable video stabilization(`stabilize=True`) -web = WebGear_RTC(source="foo.mp4", stabilize=True, logging=True, **options) - -# run this app on Uvicorn server at address http://localhost:8000/ -uvicorn.run(web(), host="localhost", port=8000) - -# close app safely -web.shutdown() -``` - -  - \ No newline at end of file +  \ No newline at end of file diff --git a/docs/gears/webgear_rtc/overview.md b/docs/gears/webgear_rtc/overview.md index df6d7cb39..d7087f9aa 100644 --- a/docs/gears/webgear_rtc/overview.md +++ b/docs/gears/webgear_rtc/overview.md @@ -34,7 +34,7 @@ limitations under the License. 
WebGear_RTC is implemented with the help of [**aiortc**](https://aiortc.readthedocs.io/en/latest/) library which is built on top of asynchronous I/O framework for Web Real-Time Communication (WebRTC) and Object Real-Time Communication (ORTC) and supports many features like SDP generation/parsing, Interactive Connectivity Establishment with half-trickle and mDNS support, DTLS key and certificate generation, DTLS handshake, etc. -WebGear_RTC can handle [multiple consumers](../../webgear_rtc/advanced/#using-webgear_rtc-as-real-time-broadcaster) seamlessly and provides native support for ICE _(Interactive Connectivity Establishment)_ protocol, STUN _(Session Traversal Utilities for NAT)_, and TURN _(Traversal Using Relays around NAT)_ servers that help us to easily establish direct media connection with the remote peers for uninterrupted data flow. It also allows us to define our custom Server as a source to manipulate frames easily before sending them across the network(see this [doc](../../webgear_rtc/advanced/#using-webgear_rtc-with-a-custom-sourceopencv) example). +WebGear_RTC can handle [multiple consumers](../../webgear_rtc/advanced/#using-webgear_rtc-as-real-time-broadcaster) seamlessly and provides native support for ICE _(Interactive Connectivity Establishment)_ protocol, STUN _(Session Traversal Utilities for NAT)_, and TURN _(Traversal Using Relays around NAT)_ servers that help us to easily establish direct media connection with the remote peers for uninterrupted data flow. It also allows us to define our custom Server as a source to transform frames easily before sending them across the network(see this [doc](../../webgear_rtc/advanced/#using-webgear_rtc-with-a-custom-sourceopencv) example). 
WebGear_RTC API works in conjunction with [**Starlette**](https://www.starlette.io/) ASGI application and can also flexibly interact with Starlette's ecosystem of shared middleware, mountable applications, [Response classes](https://www.starlette.io/responses/), [Routing tables](https://www.starlette.io/routing/), [Static Files](https://www.starlette.io/staticfiles/), [Templating engine(with Jinja2)](https://www.starlette.io/templates/), etc. diff --git a/docs/gears/writegear/compression/usage.md b/docs/gears/writegear/compression/usage.md index 97d1bd259..bc3b0c0ea 100644 --- a/docs/gears/writegear/compression/usage.md +++ b/docs/gears/writegear/compression/usage.md @@ -221,7 +221,7 @@ In Compression Mode, WriteGear also allows URL strings _(as output)_ for network In this example, we will stream live camera feed directly to Twitch: -!!! info "YouTube-Live Streaming example code also available in [WriteGear FAQs ➶](../../../../help/writegear_faqs/#is-youtube-live-streaming-possibe-with-writegear)" +!!! info "YouTube-Live Streaming example code also available in [WriteGear FAQs ➶](../../../../help/writegear_ex/#using-writegears-compression-mode-for-youtube-live-streaming)" !!! warning "This example assume you already have a [**Twitch Account**](https://www.twitch.tv/) for publishing video." @@ -576,7 +576,7 @@ In this example code, we will merging the audio from a Audio Device _(for e.g. W !!! fail "If audio still doesn't work then reach us out on [Gitter ➶](https://gitter.im/vidgear/community) Community channel" -!!! danger "Make sure this `-audio` audio-source it compatible with provided video-source, otherwise you encounter multiple errors or no output at all." +!!! danger "Make sure this `-i` audio-source is compatible with provided video-source, otherwise you will encounter multiple errors or no output at all." !!! 
warning "You **MUST** use [`-input_framerate`](../../params/#a-exclusive-parameters) attribute to set exact value of input framerate when using external audio in Real-time Frames mode, otherwise audio delay will occur in output streams." diff --git a/docs/gears/writegear/introduction.md b/docs/gears/writegear/introduction.md index 5deb1010e..0149e6376 100644 --- a/docs/gears/writegear/introduction.md +++ b/docs/gears/writegear/introduction.md @@ -45,7 +45,7 @@ WriteGear primarily operates in following modes: * [**Compression Mode**](../compression/overview/): In this mode, WriteGear utilizes powerful **FFmpeg** inbuilt encoders to encode lossless multimedia files. This mode provides us the ability to exploit almost any parameter available within FFmpeg, effortlessly and flexibly, and while doing that it robustly handles all errors/warnings quietly. -* [**Non-Compression Mode**](../non_compression/overview/): In this mode, WriteGear utilizes basic **OpenCV's inbuilt VideoWriter API** tools. This mode also supports all parameters manipulation available within VideoWriter API, but it lacks the ability to manipulate encoding parameters and other important features like video compression, audio encoding, etc. +* [**Non-Compression Mode**](../non_compression/overview/): In this mode, WriteGear utilizes basic **OpenCV's inbuilt VideoWriter API** tools. This mode also supports all parameter transformations available within OpenCV's VideoWriter API, but it lacks the ability to manipulate encoding parameters and other important features like video compression, audio encoding, etc.   @@ -75,4 +75,10 @@ from vidgear.gears import WriteGear See here 🚀 -  \ No newline at end of file +  + +## Bonus Examples + +!!! 
example "Checkout more advanced WriteGear examples with unusual configuration [here ➶](../../../help/writegear_ex/)" + +&nbsp; \ No newline at end of file diff --git a/docs/help/camgear_ex.md b/docs/help/camgear_ex.md new file mode 100644 index 000000000..5a0522d3a --- /dev/null +++ b/docs/help/camgear_ex.md @@ -0,0 +1,243 @@ + + +# CamGear Examples + +&nbsp; + +## Synchronizing Two Sources in CamGear + +In this example, both streams and corresponding frames will be processed synchronously i.e. with no delay: + +!!! danger "Using same source with more than one instance of CamGear can lead to [Global Interpreter Lock (GIL)](https://wiki.python.org/moin/GlobalInterpreterLock#:~:text=In%20CPython%2C%20the%20global%20interpreter,conditions%20and%20ensures%20thread%20safety.&text=The%20GIL%20can%20degrade%20performance%20even%20when%20it%20is%20not%20a%20bottleneck.) that degrades performance even when it is not a bottleneck." + +```python +# import required libraries +from vidgear.gears import CamGear +import cv2 +import time + +# define and start the stream on first source ( For e.g #0 index device) +stream1 = CamGear(source=0, logging=True).start() + +# define and start the stream on second source ( For e.g #1 index device) +stream2 = CamGear(source=1, logging=True).start() + +# infinite loop +while True: + + frameA = stream1.read() + # read frames from stream1 + + frameB = stream2.read() + # read frames from stream2 + + # check if either of the two frames is None + if frameA is None or frameB is None: + #if True break the infinite loop + break + + # do something with both frameA and frameB here + cv2.imshow("Output Frame1", frameA) + cv2.imshow("Output Frame2", frameB) + # Show output window of stream1 and stream 2 separately + + key = cv2.waitKey(1) & 0xFF + # check for 'q' key-press + if key == ord("q"): + #if 'q' key-pressed break out + break + + if key == ord("w"): + #if 'w' key-pressed save both frameA and frameB at same time + cv2.imwrite("Image-1.jpg", frameA) + 
cv2.imwrite("Image-2.jpg", frameB) + #break #uncomment this line to break out after taking images + +cv2.destroyAllWindows() +# close output window + +# safely close both video streams +stream1.stop() +stream2.stop() +``` + +  + +## Using variable Youtube-DL parameters in CamGear + +CamGear provides exclusive attributes `STREAM_RESOLUTION` _(for specifying stream resolution)_ & `STREAM_PARAMS` _(for specifying underlying API(e.g. `youtube-dl`) parameters)_ with its [`options`](../../gears/camgear/params/#options) dictionary parameter. + +The complete usage example is as follows: + +!!! tip "More information on `STREAM_RESOLUTION` & `STREAM_PARAMS` attributes can be found [here ➶](../../gears/camgear/advanced/source_params/#exclusive-camgear-parameters)" + +```python +# import required libraries +from vidgear.gears import CamGear +import cv2 + +# specify attributes +options = {"STREAM_RESOLUTION": "720p", "STREAM_PARAMS": {"nocheckcertificate": True}} + +# Add YouTube Video URL as input source (for e.g https://youtu.be/bvetuLwJIkA) +# and enable Stream Mode (`stream_mode = True`) +stream = CamGear( + source="https://youtu.be/bvetuLwJIkA", stream_mode=True, logging=True, **options +).start() + +# loop over +while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + # {do something with the frame here} + + # Show output window + cv2.imshow("Output", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + +# close output window +cv2.destroyAllWindows() + +# safely close video stream +stream.stop() +``` + + +  + + +## Using CamGear for capturing RSTP/RTMP URLs + +You can open any network stream _(such as RTSP/RTMP)_ just by providing its URL directly to CamGear's [`source`](../../gears/camgear/params/#source) parameter. + +Here's a high-level wrapper code around CamGear API to enable auto-reconnection during capturing: + +!!! 
new "New in v0.2.2" + This example was added in `v0.2.2`. + +??? tip "Enforcing UDP stream" + + You can easily enforce UDP for RTSP streams in place of the default TCP, by putting the following lines of code on the top of your existing code: + + ```python + # import required libraries + import os + + # enforce UDP + os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "rtsp_transport;udp" + ``` + + Finally, use [`backend`](../../gears/camgear/params/#backend) parameter value as `backend=cv2.CAP_FFMPEG` in CamGear. + + +```python +from vidgear.gears import CamGear +import cv2 +import datetime +import time + + +class Reconnecting_CamGear: + def __init__(self, cam_address, reset_attempts=50, reset_delay=5): + self.cam_address = cam_address + self.reset_attempts = reset_attempts + self.reset_delay = reset_delay + self.source = CamGear(source=self.cam_address).start() + self.running = True + # holds the last successfully read frame (returned during reconnection) + self.frame = None + + def read(self): + if self.source is None: + return None + if self.running and self.reset_attempts > 0: + frame = self.source.read() + if frame is None: + self.source.stop() + self.reset_attempts -= 1 + print( + "Re-connection Attempt-{} occurred at time:{}".format( + str(self.reset_attempts), + datetime.datetime.now().strftime("%m-%d-%Y %I:%M:%S%p"), + ) + ) + time.sleep(self.reset_delay) + self.source = CamGear(source=self.cam_address).start() + # return previous frame + return self.frame + else: + self.frame = frame + return frame + else: + return None + + def stop(self): + self.running = False + self.reset_attempts = 0 + self.frame = None + if self.source is not None: + self.source.stop() + + +if __name__ == "__main__": + # open any valid video stream + stream = Reconnecting_CamGear( + cam_address="rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov", + reset_attempts=20, + reset_delay=5, + ) + + # loop over + while True: + + # read frames from stream + frame = stream.read() + + # check for frame if None-type + if frame is None: + break + + # {do something with the frame here} + 
# Show output window + cv2.imshow("Output", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.stop() +``` + +  \ No newline at end of file diff --git a/docs/help/camgear_faqs.md b/docs/help/camgear_faqs.md index a55cdc848..4aea0d91b 100644 --- a/docs/help/camgear_faqs.md +++ b/docs/help/camgear_faqs.md @@ -72,50 +72,7 @@ limitations under the License. ## How to change quality and parameters of YouTube Streams with CamGear? -CamGear provides exclusive attributes `STREAM_RESOLUTION` _(for specifying stream resolution)_ & `STREAM_PARAMS` _(for specifying underlying API(e.g. `youtube-dl`) parameters)_ with its [`options`](../../gears/camgear/params/#options) dictionary parameter. The complete usage example is as follows: - -!!! tip "More information on `STREAM_RESOLUTION` & `STREAM_PARAMS` attributes can be found [here ➶](../../gears/camgear/advanced/source_params/#exclusive-camgear-parameters)" - -```python -# import required libraries -from vidgear.gears import CamGear -import cv2 - -# specify attributes -options = {"STREAM_RESOLUTION": "720p", "STREAM_PARAMS": {"nocheckcertificate": True}} - -# Add YouTube Video URL as input source (for e.g https://youtu.be/bvetuLwJIkA) -# and enable Stream Mode (`stream_mode = True`) -stream = CamGear( - source="https://youtu.be/bvetuLwJIkA", stream_mode=True, logging=True, **options -).start() - -# loop over -while True: - - # read frames from stream - frame = stream.read() - - # check for frame if Nonetype - if frame is None: - break - - # {do something with the frame here} - - # Show output window - cv2.imshow("Output", frame) - - # check for 'q' key if pressed - key = cv2.waitKey(1) & 0xFF - if key == ord("q"): - break - -# close output window -cv2.destroyAllWindows() - -# safely close video stream -stream.stop() -``` +**Answer:** CamGear provides exclusive attributes 
`STREAM_RESOLUTION` _(for specifying stream resolution)_ & `STREAM_PARAMS` _(for specifying underlying API(e.g. `youtube-dl`) parameters)_ with its [`options`](../../gears/camgear/params/#options) dictionary parameter. See [this bonus example ➶](../camgear_ex/#using-variable-youtube-dl-parameters-in-camgear).   @@ -123,57 +80,7 @@ stream.stop() ## How to open RSTP network streams with CamGear? -You can open any local network stream _(such as RTSP)_ just by providing its URL directly to CamGear's [`source`](../../gears/camgear/params/#source) parameter. The complete usage example is as follows: - -??? tip "Enforcing UDP stream" - - You can easily enforce UDP for RSTP streams inplace of default TCP, by putting following lines of code on the top of your existing code: - - ```python - # import required libraries - import os - - # enforce UDP - os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "rtsp_transport;udp" - ``` - - Finally, use [`backend`](../../gears/camgear/params/#backend) parameter value as `backend=cv2.CAP_FFMPEG` in CamGear. - - -```python -# import required libraries -from vidgear.gears import CamGear -import cv2 - -# open valid network video-stream -stream = CamGear(source="rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov").start() - -# loop over -while True: - - # read frames from stream - frame = stream.read() - - # check for frame if Nonetype - if frame is None: - break - - # {do something with the frame here} - - # Show output window - cv2.imshow("Output", frame) - - # check for 'q' key if pressed - key = cv2.waitKey(1) & 0xFF - if key == ord("q"): - break - -# close output window -cv2.destroyAllWindows() - -# safely close video stream -stream.stop() -``` +**Answer:** You can open any local network stream _(such as RTSP)_ just by providing its URL directly to CamGear's [`source`](../../gears/camgear/params/#source) parameter. See [this bonus example ➶](../camgear_ex/#using-camgear-for-capturing-rstprtmp-urls).   
@@ -191,7 +98,7 @@ stream.stop() ## How to synchronize between two cameras? -**Answer:** See [this issue comment ➶](https://github.com/abhiTronix/vidgear/issues/1#issuecomment-473943037). +**Answer:** See [this bonus example ➶](../camgear_ex/#synchronizing-two-sources-in-camgear). &nbsp; diff --git a/docs/help/get_help.md b/docs/help/get_help.md index 619e2d56a..b01b34706 100644 --- a/docs/help/get_help.md +++ b/docs/help/get_help.md @@ -37,7 +37,7 @@ There are several ways to get help with VidGear: > Got a question related to VidGear Working? -Checkout our Frequently Asked Questions, a curated list of all the questions with adequate answer that we commonly receive, for quickly troubleshooting your problems: +Checkout the Frequently Asked Questions, a curated list of all the questions with adequate answers that we commonly receive, for quickly troubleshooting your problems: - [General FAQs ➶](general_faqs.md) - [CamGear FAQs ➶](camgear_faqs.md) @@ -56,6 +56,26 @@ Checkout our Frequently Asked Questions, a curated list of all the questions wit &nbsp; +## Bonus Examples + +> How do we do this with that API? 
+ +Checkout the Bonus Examples, a curated list of all advanced examples with unusual configuration, which aren't available in Vidgear API's usage examples: + +- [CamGear Examples ➶](camgear_ex.md) +- [PiGear Examples ➶](pigear_ex.md) +- [ScreenGear Examples ➶](screengear_ex.md) +- [StreamGear Examples ➶](streamgear_ex.md) +- [WriteGear Examples ➶](writegear_ex.md) +- [NetGear Examples ➶](netgear_ex.md) +- [WebGear Examples ➶](webgear_ex.md) +- [WebGear_RTC Examples ➶](webgear_rtc_ex.md) +- [VideoGear Examples ➶](videogear_ex.md) +- [Stabilizer Class Examples ➶](stabilizer_ex.md) +- [NetGear_Async Examples ➶](netgear_async_ex.md) + +&nbsp; + ## Join our Gitter Community channel > Have you come up with some new idea 💡 or looking for the fastest way to troubleshoot your problems diff --git a/docs/help/netgear_async_ex.md b/docs/help/netgear_async_ex.md new file mode 100644 index 000000000..a46c17c7e --- /dev/null +++ b/docs/help/netgear_async_ex.md @@ -0,0 +1,169 @@ + + +# NetGear_Async Examples + +&nbsp; + +## Using NetGear_Async with WebGear + +The complete usage example is as follows: + +!!! new "New in v0.2.2" + This example was added in `v0.2.2`. + +### Client + WebGear Server + +Open a terminal on Client System where you want to display the input frames _(and setup WebGear server)_ received from the Server and execute the following python code: + +!!! danger "After running this code, Make sure to open Browser immediately otherwise NetGear_Async will soon exit with `TimeoutError`. You can also try setting [`timeout`](../../gears/netgear_async/params/#timeout) parameter to a higher value to extend this timeout." + +!!! warning "Make sure you use different `port` value for NetGear_Async and WebGear API." + +!!! alert "High CPU utilization may occur on Client's end. User discretion is advised." + +!!!
note "Note down the IP-address of this system _(required at Server's end)_ by executing the `hostname -I` command and also replace it in the following code." + +```python +# import libraries +from vidgear.gears.asyncio import NetGear_Async +from vidgear.gears.asyncio import WebGear +from vidgear.gears.asyncio.helper import reducer +import uvicorn, asyncio, cv2 + +# Define NetGear_Async Client and define parameters +client = NetGear_Async( + receive_mode=True, + pattern=1, + logging=True, +).launch() + +# create your own custom frame producer +async def my_frame_producer(): + + # loop over Client's Asynchronous Frame Generator + async for frame in client.recv_generator(): + + # {do something with received frames here} + + # reducer frames size if you want more performance otherwise comment this line + frame = await reducer( + frame, percentage=30, interpolation=cv2.INTER_AREA + ) # reduce frame by 30% + + # handle JPEG encoding + encodedImage = cv2.imencode(".jpg", frame)[1].tobytes() + # yield frame in byte format + yield (b"--frame\r\nContent-Type:image/jpeg\r\n\r\n" + encodedImage + b"\r\n") + await asyncio.sleep(0) + + +if __name__ == "__main__": + # Set event loop to client's + asyncio.set_event_loop(client.loop) + + # initialize WebGear app without any source + web = WebGear(logging=True) + + # add your custom frame producer to config + web.config["generator"] = my_frame_producer + + # run this app on Uvicorn server at address http://localhost:8000/ + uvicorn.run(web(), host="localhost", port=8000) + + # safely close client + client.close() + + # close app safely + web.shutdown() +``` + +!!! success "On successfully running this code, the output stream will be displayed at address http://localhost:8000/ in your Client's Browser."
+ +### Server + +Now, Open the terminal on another Server System _(with a webcam connected to it at index 0)_, and execute the following python code: + +!!! note "Replace the IP address in the following code with Client's IP address you noted earlier." + +```python +# import library +from vidgear.gears.asyncio import NetGear_Async +import cv2, asyncio + +# initialize Server without any source +server = NetGear_Async( + source=None, + address="192.168.x.xxx", + port="5454", + protocol="tcp", + pattern=1, + logging=True, +) + +# Create a async frame generator as custom source +async def my_frame_generator(): + + # !!! define your own video source here !!! + # Open any video stream such as live webcam + # video stream on first index(i.e. 0) device + stream = cv2.VideoCapture(0) + + # loop over stream until its terminated + while True: + + # read frames + (grabbed, frame) = stream.read() + + # check if frame empty + if not grabbed: + break + + # do something with the frame to be sent here + + # yield frame + yield frame + # sleep for sometime + await asyncio.sleep(0) + + # close stream + stream.release() + + +if __name__ == "__main__": + # set event loop + asyncio.set_event_loop(server.loop) + # Add your custom source generator to Server configuration + server.config["generator"] = my_frame_generator() + # Launch the Server + server.launch() + try: + # run your main function task until it is complete + server.loop.run_until_complete(server.task) + except (KeyboardInterrupt, SystemExit): + # wait for interrupts + pass + finally: + # finally close the server + server.close() +``` + +  diff --git a/docs/help/netgear_ex.md b/docs/help/netgear_ex.md new file mode 100644 index 000000000..ef43baaa8 --- /dev/null +++ b/docs/help/netgear_ex.md @@ -0,0 +1,368 @@ + + +# NetGear Examples + +  + +## Using NetGear with WebGear + +The complete usage example is as follows: + +!!! new "New in v0.2.2" + This example was added in `v0.2.2`. 
+ +### Client + WebGear Server + +Open a terminal on Client System where you want to display the input frames _(and setup WebGear server)_ received from the Server and execute the following python code: + +!!! danger "After running this code, Make sure to open Browser immediately otherwise NetGear will soon exit with `RuntimeError`. You can also try setting [`max_retries`](../../gears/netgear/params/#options) and [`request_timeout`](../../gears/netgear/params/#options) like attributes to a higher value to avoid this." + +!!! warning "Make sure you use different `port` value for NetGear and WebGear API." + +!!! alert "High CPU utilization may occur on Client's end. User discretion is advised." + +!!! note "Note down the IP-address of this system _(required at Server's end)_ by executing the `hostname -I` command and also replace it in the following code." + +```python +# import necessary libs +import uvicorn, asyncio, cv2 +from vidgear.gears import NetGear +from vidgear.gears.asyncio import WebGear +from vidgear.gears.asyncio.helper import reducer + +# initialize WebGear app without any source +web = WebGear(logging=True) + + +# activate jpeg encoding and specify other related parameters +options = { + "jpeg_compression": True, + "jpeg_compression_quality": 90, + "jpeg_compression_fastdct": True, + "jpeg_compression_fastupsample": True, +} + +# create your own custom frame producer +async def my_frame_producer(): + # initialize global params + # Define NetGear Client at given IP address and define parameters + # !!! change following IP address '192.168.x.xxx' with yours !!!
+ + client = NetGear( + receive_mode=True, + address="192.168.x.xxx", + port="5454", + protocol="tcp", + pattern=1, + logging=True, + **options, + ) + + # loop over frames + while True: + # receive frames from network + frame = client.recv() + + # if NoneType + if frame is None: + break + + # do something with your OpenCV frame here + + # reducer frames size if you want more performance otherwise comment this line + frame = await reducer( + frame, percentage=30, interpolation=cv2.INTER_AREA + ) # reduce frame by 30% + + # handle JPEG encoding + encodedImage = cv2.imencode(".jpg", frame)[1].tobytes() + # yield frame in byte format + yield (b"--frame\r\nContent-Type:image/jpeg\r\n\r\n" + encodedImage + b"\r\n") + await asyncio.sleep(0) + # close stream + client.close() + + +# add your custom frame producer to config with adequate IP address +web.config["generator"] = my_frame_producer + +# run this app on Uvicorn server at address http://localhost:8000/ +uvicorn.run(web(), host="localhost", port=8000) + +# close app safely +web.shutdown() +``` + +!!! success "On successfully running this code, the output stream will be displayed at address http://localhost:8000/ in your Client's Browser." + + +### Server + +Now, Open the terminal on another Server System _(with a webcam connected to it at index 0)_, and execute the following python code: + +!!! note "Replace the IP address in the following code with Client's IP address you noted earlier." + +```python +# import required libraries +from vidgear.gears import VideoGear +from vidgear.gears import NetGear +import cv2 + +# activate jpeg encoding and specify other related parameters +options = { + "jpeg_compression": True, + "jpeg_compression_quality": 90, + "jpeg_compression_fastdct": True, + "jpeg_compression_fastupsample": True, +} + +# Open live video stream on webcam at first index(i.e. 0) device +stream = VideoGear(source=0).start() + +# Define NetGear server at given IP address and define parameters +# !!!
change following IP address '192.168.x.xxx' with client's IP address !!! +server = NetGear( + address="192.168.x.xxx", + port="5454", + protocol="tcp", + pattern=1, + logging=True, + **options +) + +# loop over until KeyBoard Interrupted +while True: + + try: + # read frames from stream + frame = stream.read() + + # check for frame if None-type + if frame is None: + break + + # {do something with the frame here} + + # send frame to server + server.send(frame) + + except KeyboardInterrupt: + break + +# safely close video stream +stream.stop() + +# safely close server +server.close() +``` + +  + +## Using NetGear with WebGear_RTC + +The complete usage example is as follows: + +!!! new "New in v0.2.2" + This example was added in `v0.2.2`. + +### Client + WebGear_RTC Server + +Open a terminal on Client System where you want to display the input frames _(and setup WebGear_RTC server)_ received from the Server and execute the following python code: + +!!! danger "After running this code, Make sure to open Browser immediately otherwise NetGear will soon exit with `RuntimeError`. You can also try setting [`max_retries`](../../gears/netgear/params/#options) and [`request_timeout`](../../gears/netgear/params/#options) like attributes to a higher value to avoid this." + +!!! warning "Make sure you use different `port` value for NetGear and WebGear_RTC API." + +!!! alert "High CPU utilization may occur on Client's end. User discretion is advised." + +!!! 
note "Note down the IP-address of this system _(required at Server's end)_ by executing the `hostname -I` command and also replace it in the following code." + +```python +# import required libraries +import uvicorn, asyncio, cv2 +from av import VideoFrame +from aiortc import VideoStreamTrack +from aiortc.mediastreams import MediaStreamError +from vidgear.gears import NetGear +from vidgear.gears.asyncio import WebGear_RTC +from vidgear.gears.asyncio.helper import reducer + +# initialize WebGear_RTC app without any source +web = WebGear_RTC(logging=True) + +# activate jpeg encoding and specify other related parameters +options = { + "jpeg_compression": True, + "jpeg_compression_quality": 90, + "jpeg_compression_fastdct": True, + "jpeg_compression_fastupsample": True, +} + + +# create your own Bare-Minimum Custom Media Server +class Custom_RTCServer(VideoStreamTrack): + """ + Custom Media Server using OpenCV, an inherit-class + to aiortc's VideoStreamTrack. + """ + + def __init__( + self, + address=None, + port="5454", + protocol="tcp", + pattern=1, + logging=True, + options={}, + ): + # don't forget this line! + super().__init__() + + # initialize global params + # Define NetGear Client at given IP address and define parameters + self.client = NetGear( + receive_mode=True, + address=address, + port=port, + protocol=protocol, + pattern=pattern, + logging=logging, + **options + ) + + async def recv(self): + """ + A coroutine function that yields `av.frame.Frame`. + """ + # don't forget this function!!!
+ + # get next timestamp + pts, time_base = await self.next_timestamp() + + # receive frames from network + frame = self.client.recv() + + # if NoneType + if frame is None: + raise MediaStreamError + + # reducer frames size if you want more performance otherwise comment this line + frame = await reducer(frame, percentage=30) # reduce frame by 30% + + # construct `av.frame.Frame` from `numpy.ndarray` + av_frame = VideoFrame.from_ndarray(frame, format="bgr24") + av_frame.pts = pts + av_frame.time_base = time_base + + # return `av.frame.Frame` + return av_frame + + def terminate(self): + """ + Gracefully terminates the NetGear client + """ + # don't forget this function!!! + + # terminate + if self.client is not None: + self.client.close() + self.client = None + + +# assign your custom media server to config with adequate IP address +# !!! change following IP address '192.168.x.xxx' with yours !!! +web.config["server"] = Custom_RTCServer( + address="192.168.x.xxx", + port="5454", + protocol="tcp", + pattern=1, + logging=True, + options=options +) + +# run this app on Uvicorn server at address http://localhost:8000/ +uvicorn.run(web(), host="localhost", port=8000) + +# close app safely +web.shutdown() +``` + +!!! success "On successfully running this code, the output stream will be displayed at address http://localhost:8000/ in your Client's Browser." + +### Server + +Now, Open the terminal on another Server System _(with a webcam connected to it at index 0)_, and execute the following python code: + +!!! note "Replace the IP address in the following code with Client's IP address you noted earlier."
+ +```python +# import required libraries +from vidgear.gears import VideoGear +from vidgear.gears import NetGear +import cv2 + +# activate jpeg encoding and specify other related parameters +options = { + "jpeg_compression": True, + "jpeg_compression_quality": 90, + "jpeg_compression_fastdct": True, + "jpeg_compression_fastupsample": True, +} + +# Open live video stream on webcam at first index(i.e. 0) device +stream = VideoGear(source=0).start() + +# Define NetGear server at given IP address and define parameters +# !!! change following IP address '192.168.x.xxx' with client's IP address !!! +server = NetGear( + address="192.168.x.xxx", + port="5454", + protocol="tcp", + pattern=1, + logging=True, + **options +) + +# loop over until KeyBoard Interrupted +while True: + + try: + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + # {do something with the frame here} + + # send frame to server + server.send(frame) + + except KeyboardInterrupt: + break + +# safely close video stream +stream.stop() + +# safely close server +server.close() +``` + +  \ No newline at end of file diff --git a/docs/help/pigear_ex.md b/docs/help/pigear_ex.md new file mode 100644 index 000000000..03d86f63e --- /dev/null +++ b/docs/help/pigear_ex.md @@ -0,0 +1,75 @@ + + +# PiGear Examples + +  + +## Setting variable `picamera` parameters for Camera Module at runtime + +You can use `stream` global parameter in PiGear to feed any [`picamera`](https://picamera.readthedocs.io/en/release-1.10/api_camera.html) parameters at runtime. 
+ +In this example we will set initial Camera Module's `brightness` value to `80`, and will change it to `50` when **`z` key** is pressed at runtime: + +```python +# import required libraries +from vidgear.gears import PiGear +import cv2 + +# initial parameters +options = {"brightness": 80} # set brightness to 80 + +# open pi video stream with default parameters +stream = PiGear(logging=True, **options).start() + +# loop over +while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + + # {do something with the frame here} + + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + # check for 'z' key if pressed + if key == ord("z"): + # change brightness to 50 + stream.stream.brightness = 50 + +# close output window +cv2.destroyAllWindows() + +# safely close video stream +stream.stop() +``` + +&nbsp; \ No newline at end of file diff --git a/docs/help/pigear_faqs.md b/docs/help/pigear_faqs.md index 41725348e..3c24814da 100644 --- a/docs/help/pigear_faqs.md +++ b/docs/help/pigear_faqs.md @@ -67,53 +67,6 @@ limitations under the License. ## How to change `picamera` settings for Camera Module at runtime? -**Answer:** You can use `stream` global parameter in PiGear to feed any `picamera` setting at runtime. See following sample usage example: - -!!! info "" - In this example we will set initial Camera Module's `brightness` value `80`, and will change it `50` when **`z` key** is pressed at runtime.
- -```python -# import required libraries -from vidgear.gears import PiGear -import cv2 - -# initial parameters -options = {"brightness": 80} # set brightness to 80 - -# open pi video stream with default parameters -stream = PiGear(logging=True, **options).start() - -# loop over -while True: - - # read frames from stream - frame = stream.read() - - # check for frame if Nonetype - if frame is None: - break - - - # {do something with the frame here} - - - # Show output window - cv2.imshow("Output Frame", frame) - - # check for 'q' key if pressed - key = cv2.waitKey(1) & 0xFF - if key == ord("q"): - break - # check for 'z' key if pressed - if key == ord("z"): - # change brightness to 50 - stream.stream.brightness = 50 - -# close output window -cv2.destroyAllWindows() - -# safely close video stream -stream.stop() -``` +**Answer:** You can use `stream` global parameter in PiGear to feed any `picamera` setting at runtime. See [this bonus example ➶](../pigear_ex/#setting-variable-picamera-parameters-for-camera-module-at-runtime)   \ No newline at end of file diff --git a/docs/help/screengear_ex.md b/docs/help/screengear_ex.md new file mode 100644 index 000000000..80463ee11 --- /dev/null +++ b/docs/help/screengear_ex.md @@ -0,0 +1,149 @@ + + +# ScreenGear Examples + +  + +## Using ScreenGear with NetGear and WriteGear + +The complete usage example is as follows: + +!!! new "New in v0.2.2" + This example was added in `v0.2.2`. + +### Client + WriteGear + +Open a terminal on Client System _(where you want to save the input frames received from the Server)_ and execute the following python code: + +!!! info "Note down the IP-address of this system(required at Server's end) by executing the command: `hostname -I` and also replace it in the following code." + +!!! tip "You can terminate client anytime by pressing ++ctrl+"C"++ on your keyboard!" 
+ +```python +# import required libraries +from vidgear.gears import NetGear +from vidgear.gears import WriteGear +import cv2 + +# define various tweak flags +options = {"flag": 0, "copy": False, "track": False} + +# Define Netgear Client at given IP address and define parameters +# !!! change following IP address '192.168.x.xxx' with yours !!! +client = NetGear( + address="192.168.x.xxx", + port="5454", + protocol="tcp", + pattern=1, + receive_mode=True, + logging=True, + **options +) + +# Define writer with default parameters and suitable output filename for e.g. `Output.mp4` +writer = WriteGear(output_filename="Output.mp4") + +# loop over +while True: + + # receive frames from network + frame = client.recv() + + # check for received frame if Nonetype + if frame is None: + break + + # {do something with the frame here} + + # write frame to writer + writer.write(frame) + +# close output window +cv2.destroyAllWindows() + +# safely close client +client.close() + +# safely close writer +writer.close() +``` + +### Server + ScreenGear + +Now, Open the terminal on another Server System _(with a monitor/display attached to it)_, and execute the following python code: + +!!! info "Replace the IP address in the following code with Client's IP address you noted earlier." + +!!! tip "You can terminate stream on both side anytime by pressing ++ctrl+"C"++ on your keyboard!" + +```python +# import required libraries +from vidgear.gears import ScreenGear +from vidgear.gears import NetGear + +# define dimensions of screen w.r.t to given monitor to be captured +options = {"top": 40, "left": 0, "width": 100, "height": 100} + +# open stream with defined parameters +stream = ScreenGear(logging=True, **options).start() + +# define various netgear tweak flags +options = {"flag": 0, "copy": False, "track": False} + +# Define Netgear server at given IP address and define parameters +# !!! change following IP address '192.168.x.xxx' with client's IP address !!!
+server = NetGear(
+    address="192.168.x.xxx",
+    port="5454",
+    protocol="tcp",
+    pattern=1,
+    logging=True,
+    **options
+)
+
+# loop over until KeyboardInterrupt
+while True:
+
+    try:
+        # read frames from stream
+        frame = stream.read()
+
+        # check for frame if Nonetype
+        if frame is None:
+            break
+
+        # {do something with the frame here}
+
+        # send frame to server
+        server.send(frame)
+
+    except KeyboardInterrupt:
+        break
+
+# safely close video stream
+stream.stop()
+
+# safely close server
+server.close()
+```
+
+&nbsp;
+
diff --git a/docs/help/stabilizer_ex.md b/docs/help/stabilizer_ex.md
new file mode 100644
index 000000000..8b8636265
--- /dev/null
+++ b/docs/help/stabilizer_ex.md
@@ -0,0 +1,236 @@
+
+
+# Stabilizer Class Examples
+
+&nbsp;
+
+## Saving Stabilizer Class output with Live Audio Input
+
+In this example, we will merge the audio from an audio device _(for e.g. a webcam's in-built mic input)_ with stabilized frames incoming from the Stabilizer Class _(which is also using the same webcam's video input through OpenCV)_, and save the final output as a compressed video file, all in real time:
+
+!!! new "New in v0.2.2"
+    This example was added in `v0.2.2`.
+
+!!! alert "Example Assumptions"
+
+    * You're running a Linux machine.
+    * You already have the appropriate audio drivers and software installed on your machine.
+
+
+??? tip "Identifying and Specifying sound card on different OS platforms"
+
+    === "On Windows"
+
+        Windows OS users can use [dshow](https://trac.ffmpeg.org/wiki/DirectShow) (DirectShow) to list audio input devices, which is the preferred option for Windows users. You can refer to the following steps to identify and specify your sound card:
+
+        - [x] **[OPTIONAL] Enable sound card(if disabled):** First enable your Stereo Mix by opening the "Sound" window and select the "Recording" tab, then right click on the window and select "Show Disabled Devices" to toggle the Stereo Mix device visibility.
**Follow this [post ➶](https://forums.tomshardware.com/threads/no-sound-through-stereo-mix-realtek-hd-audio.1716182/) for more details.**
+
+        - [x] **Identify Sound Card:** Then, you can locate your soundcard using `dshow` as follows:
+
+            ```sh
+            c:\> ffmpeg -list_devices true -f dshow -i dummy
+            ffmpeg version N-45279-g6b86dd5... --enable-runtime-cpudetect
+              libavutil      51. 74.100 / 51. 74.100
+              libavcodec     54. 65.100 / 54. 65.100
+              libavformat    54. 31.100 / 54. 31.100
+              libavdevice    54.  3.100 / 54.  3.100
+              libavfilter     3. 19.102 /  3. 19.102
+              libswscale      2.  1.101 /  2.  1.101
+              libswresample   0. 16.100 /  0. 16.100
+            [dshow @ 03ACF580] DirectShow video devices
+            [dshow @ 03ACF580]  "Integrated Camera"
+            [dshow @ 03ACF580]  "USB2.0 Camera"
+            [dshow @ 03ACF580] DirectShow audio devices
+            [dshow @ 03ACF580]  "Microphone (Realtek High Definition Audio)"
+            [dshow @ 03ACF580]  "Microphone (USB2.0 Camera)"
+            dummy: Immediate exit requested
+            ```
+
+
+        - [x] **Specify Sound Card:** Then, you can specify your located soundcard in WriteGear as follows:
+
+            ```python
+            # assign appropriate input audio-source
+            output_params = {
+                "-i": "audio=Microphone (USB2.0 Camera)",
+                "-thread_queue_size": "512",
+                "-f": "dshow",
+                "-ac": "2",
+                "-acodec": "aac",
+                "-ar": "44100",
+            }
+            ```
+
+        !!! fail "If audio still doesn't work then [checkout this troubleshooting guide ➶](https://www.maketecheasier.com/fix-microphone-not-working-windows10/) or reach us out on [Gitter ➶](https://gitter.im/vidgear/community) Community channel"
+
+
+    === "On Linux"
+
+        Linux OS users can use [alsa](https://ffmpeg.org/ffmpeg-all.html#alsa) to list input devices for capturing live audio input such as from a webcam. You can refer to the following steps to identify and specify your sound card:
+
+        - [x] **Identify Sound Card:** To get the list of all installed cards on your machine, you can type `arecord -l` or `arecord -L` _(longer output)_.
+ + ```sh + arecord -l + + **** List of CAPTURE Hardware Devices **** + card 0: ICH5 [Intel ICH5], device 0: Intel ICH [Intel ICH5] + Subdevices: 1/1 + Subdevice #0: subdevice #0 + card 0: ICH5 [Intel ICH5], device 1: Intel ICH - MIC ADC [Intel ICH5 - MIC ADC] + Subdevices: 1/1 + Subdevice #0: subdevice #0 + card 0: ICH5 [Intel ICH5], device 2: Intel ICH - MIC2 ADC [Intel ICH5 - MIC2 ADC] + Subdevices: 1/1 + Subdevice #0: subdevice #0 + card 0: ICH5 [Intel ICH5], device 3: Intel ICH - ADC2 [Intel ICH5 - ADC2] + Subdevices: 1/1 + Subdevice #0: subdevice #0 + card 1: U0x46d0x809 [USB Device 0x46d:0x809], device 0: USB Audio [USB Audio] + Subdevices: 1/1 + Subdevice #0: subdevice #0 + ``` + + + - [x] **Specify Sound Card:** Then, you can specify your located soundcard in WriteGear as follows: + + !!! info "The easiest thing to do is to reference sound card directly, namely "card 0" (Intel ICH5) and "card 1" (Microphone on the USB web cam), as `hw:0` or `hw:1`" + + ```python + # assign appropriate input audio-source + output_params = { + "-i": "hw:1", + "-thread_queue_size": "512", + "-f": "alsa", + "-ac": "2", + "-acodec": "aac", + "-ar": "44100", + } + ``` + + !!! fail "If audio still doesn't work then reach us out on [Gitter ➶](https://gitter.im/vidgear/community) Community channel" + + + === "On MacOS" + + MAC OS users can use the [avfoundation](https://ffmpeg.org/ffmpeg-devices.html#avfoundation) to list input devices for grabbing audio from integrated iSight cameras as well as cameras connected via USB or FireWire. You can refer following steps to identify and specify your sound card on MacOS/OSX machines: + + + - [x] **Identify Sound Card:** Then, You can locate your soundcard using `avfoundation` as follows: + + ```sh + ffmpeg -f qtkit -list_devices true -i "" + ffmpeg version N-45279-g6b86dd5... --enable-runtime-cpudetect + libavutil 51. 74.100 / 51. 74.100 + libavcodec 54. 65.100 / 54. 65.100 + libavformat 54. 31.100 / 54. 31.100 + libavdevice 54. 3.100 / 54. 
3.100
+              libavfilter     3. 19.102 /  3. 19.102
+              libswscale      2.  1.101 /  2.  1.101
+              libswresample   0. 16.100 /  0. 16.100
+            [AVFoundation input device @ 0x7f8e2540ef20] AVFoundation video devices:
+            [AVFoundation input device @ 0x7f8e2540ef20] [0] FaceTime HD camera (built-in)
+            [AVFoundation input device @ 0x7f8e2540ef20] [1] Capture screen 0
+            [AVFoundation input device @ 0x7f8e2540ef20] AVFoundation audio devices:
+            [AVFoundation input device @ 0x7f8e2540ef20] [0] Blackmagic Audio
+            [AVFoundation input device @ 0x7f8e2540ef20] [1] Built-in Microphone
+            ```
+
+
+        - [x] **Specify Sound Card:** Then, you can specify your located soundcard in WriteGear as follows:
+
+            ```python
+            # assign appropriate input audio-source
+            output_params = {
+                "-audio_device_index": "0",
+                "-thread_queue_size": "512",
+                "-f": "avfoundation",
+                "-ac": "2",
+                "-acodec": "aac",
+                "-ar": "44100",
+            }
+            ```
+
+        !!! fail "If audio still doesn't work then reach us out on [Gitter ➶](https://gitter.im/vidgear/community) Community channel"
+
+
+!!! danger "Make sure this `-i` audio-source is compatible with the provided video-source, otherwise you'll encounter multiple errors or no output at all."
+
+!!! warning "You **MUST** use the [`-input_framerate`](../../gears/writegear/compression/params/#a-exclusive-parameters) attribute to set the exact value of input framerate when using external audio in Real-time Frames mode, otherwise audio delay will occur in output streams."
+
+```python
+# import required libraries
+from vidgear.gears import WriteGear
+from vidgear.gears.stabilizer import Stabilizer
+import cv2
+
+# Open suitable video stream, such as webcam on first index(i.e.
0) +stream = cv2.VideoCapture(0) + +# initiate stabilizer object with defined parameters +stab = Stabilizer(smoothing_radius=30, crop_n_zoom=True, border_size=5, logging=True) + +# change with your webcam soundcard, plus add additional required FFmpeg parameters for your writer +output_params = { + "-thread_queue_size": "512", + "-f": "alsa", + "-ac": "1", + "-ar": "48000", + "-i": "plughw:CARD=CAMERA,DEV=0", +} + +# Define writer with defined parameters and suitable output filename for e.g. `Output.mp4 +writer = WriteGear(output_filename="Output.mp4", logging=True, **output_params) + +# loop over +while True: + + # read frames from stream + (grabbed, frame) = stream.read() + + # check for frame if not grabbed + if not grabbed: + break + + # send current frame to stabilizer for processing + stabilized_frame = stab.stabilize(frame) + + # wait for stabilizer which still be initializing + if stabilized_frame is None: + continue + + # {do something with the stabilized frame here} + + # write stabilized frame to writer + writer.write(stabilized_frame) + + +# clear stabilizer resources +stab.clean() + +# safely close video stream +stream.release() + +# safely close writer +writer.close() +``` + +  \ No newline at end of file diff --git a/docs/help/streamgear_ex.md b/docs/help/streamgear_ex.md new file mode 100644 index 000000000..d8a83db14 --- /dev/null +++ b/docs/help/streamgear_ex.md @@ -0,0 +1,161 @@ + + +# StreamGear Examples + +  + +## StreamGear Live-Streaming Usage with PiGear + +In this example, we will be Live-Streaming video-frames from Raspberry Pi _(with Camera Module connected)_ using PiGear API and StreamGear API's Real-time Frames Mode: + +!!! new "New in v0.2.2" + This example was added in `v0.2.2`. + +!!! tip "Use `-window_size` & `-extra_window_size` FFmpeg parameters for controlling number of frames to be kept in Chunks. Less these value, less will be latency." + +!!! 
alert "After every few chunks _(equal to the sum of `-window_size` & `-extra_window_size` values)_, all chunks will be overwritten in Live-Streaming. Thereby, since newer chunks in manifest/playlist will contain NO information of any older ones, and therefore resultant DASH/HLS stream will play only the most recent frames." + +!!! note "In this mode, StreamGear **DOES NOT** automatically maps video-source audio to generated streams. You need to manually assign separate audio-source through [`-audio`](../../gears/streamgear/params/#a-exclusive-parameters) attribute of `stream_params` dictionary parameter." + +=== "DASH" + + ```python + # import required libraries + from vidgear.gears import PiGear + from vidgear.gears import StreamGear + import cv2 + + # add various Picamera tweak parameters to dictionary + options = { + "hflip": True, + "exposure_mode": "auto", + "iso": 800, + "exposure_compensation": 15, + "awb_mode": "horizon", + "sensor_mode": 0, + } + + # open pi video stream with defined parameters + stream = PiGear(resolution=(640, 480), framerate=60, logging=True, **options).start() + + # enable livestreaming and retrieve framerate from CamGear Stream and + # pass it as `-input_framerate` parameter for controlled framerate + stream_params = {"-input_framerate": stream.framerate, "-livestream": True} + + # describe a suitable manifest-file location/name + streamer = StreamGear(output="dash_out.mpd", **stream_params) + + # loop over + while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + # {do something with the frame here} + + # send frame to streamer + streamer.stream(frame) + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.stop() + + # safely close streamer + streamer.terminate() + ``` 
+ +=== "HLS" + + ```python + # import required libraries + from vidgear.gears import PiGear + from vidgear.gears import StreamGear + import cv2 + + # add various Picamera tweak parameters to dictionary + options = { + "hflip": True, + "exposure_mode": "auto", + "iso": 800, + "exposure_compensation": 15, + "awb_mode": "horizon", + "sensor_mode": 0, + } + + # open pi video stream with defined parameters + stream = PiGear(resolution=(640, 480), framerate=60, logging=True, **options).start() + + # enable livestreaming and retrieve framerate from CamGear Stream and + # pass it as `-input_framerate` parameter for controlled framerate + stream_params = {"-input_framerate": stream.framerate, "-livestream": True} + + # describe a suitable manifest-file location/name + streamer = StreamGear(output="hls_out.m3u8", format = "hls", **stream_params) + + # loop over + while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + # {do something with the frame here} + + # send frame to streamer + streamer.stream(frame) + + # Show output window + cv2.imshow("Output Frame", frame) + + # check for 'q' key if pressed + key = cv2.waitKey(1) & 0xFF + if key == ord("q"): + break + + # close output window + cv2.destroyAllWindows() + + # safely close video stream + stream.stop() + + # safely close streamer + streamer.terminate() + ``` + + +  \ No newline at end of file diff --git a/docs/help/videogear_ex.md b/docs/help/videogear_ex.md new file mode 100644 index 000000000..de8a92053 --- /dev/null +++ b/docs/help/videogear_ex.md @@ -0,0 +1,220 @@ + + +# VideoGear Examples + +  + +## Using VideoGear with ROS(Robot Operating System) + +We will be using [`cv_bridge`](http://wiki.ros.org/cv_bridge/Tutorials/ConvertingBetweenROSImagesAndOpenCVImagesPython) to convert OpenCV frames to ROS image messages and vice-versa. 
+
+In this example, we'll create a node that converts OpenCV frames into ROS image messages, and then publishes them over ROS.
+
+!!! new "New in v0.2.2"
+    This example was added in `v0.2.2`.
+
+!!! note "This example is a vidgear implementation of this [wiki example](http://wiki.ros.org/cv_bridge/Tutorials/ConvertingBetweenROSImagesAndOpenCVImagesPython)."
+
+```python
+# import roslib
+import roslib
+
+roslib.load_manifest("my_package")
+
+# import other required libraries
+import sys
+import rospy
+import cv2
+from std_msgs.msg import String
+from sensor_msgs.msg import Image
+from cv_bridge import CvBridge, CvBridgeError
+from vidgear.gears import VideoGear
+
+# custom publisher class
+class image_publisher:
+    def __init__(self, source=0, logging=False):
+        # create CV bridge
+        self.bridge = CvBridge()
+        # define publisher topic
+        self.image_pub = rospy.Publisher("image_topic_pub", Image)
+        # open stream with given parameters
+        self.stream = VideoGear(source=source, logging=logging).start()
+        # define subscriber topic
+        rospy.Subscriber("image_topic_sub", Image, self.callback)
+
+    def callback(self, data):
+
+        # {do something with received ROS node data here}
+
+        # read frames
+        frame = self.stream.read()
+        # check for frame if None-type
+        if not (frame is None):
+
+            # {do something with the frame here}
+
+            # publish our frame
+            try:
+                self.image_pub.publish(self.bridge.cv2_to_imgmsg(frame, "bgr8"))
+            except CvBridgeError as e:
+                # catch any errors
+                print(e)
+
+    def close(self):
+        # stop stream
+        self.stream.stop()
+
+
+def main(args):
+    # !!! define your own video source here !!!
+    # Open any video stream such as live webcam
+    # video stream on first index(i.e.
0) device
+
+    # define publisher
+    ic = image_publisher(source=0, logging=True)
+    # initiate ROS node on publisher
+    rospy.init_node("image_publisher", anonymous=True)
+    try:
+        # run node
+        rospy.spin()
+    except KeyboardInterrupt:
+        print("Shutting down")
+    finally:
+        # close publisher
+        ic.close()
+
+
+if __name__ == "__main__":
+    main(sys.argv)
+```
+
+&nbsp;
+
+## Using VideoGear for capturing RTSP/RTMP URLs
+
+Here's a high-level wrapper code around VideoGear API to enable auto-reconnection during capturing, with optional stabilization _(`stabilize=True`)_ to stabilize captured frames on-the-go:
+
+!!! new "New in v0.2.2"
+    This example was added in `v0.2.2`.
+
+??? tip "Enforcing UDP stream"
+
+    You can easily enforce UDP for RTSP streams in place of default TCP, by putting following lines of code on the top of your existing code:
+
+    ```python
+    # import required libraries
+    import os
+
+    # enforce UDP
+    os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "rtsp_transport;udp"
+    ```
+
+    Finally, use [`backend`](../../gears/videogear/params/#backend) parameter value as `backend=cv2.CAP_FFMPEG` in VideoGear.
+
+
+```python
+from vidgear.gears import VideoGear
+import cv2
+import datetime
+import time
+
+
+class Reconnecting_VideoGear:
+    def __init__(self, cam_address, stabilize=False, reset_attempts=50, reset_delay=5):
+        self.cam_address = cam_address
+        self.stabilize = stabilize
+        self.reset_attempts = reset_attempts
+        self.reset_delay = reset_delay
+        self.source = VideoGear(
+            source=self.cam_address, stabilize=self.stabilize
+        ).start()
+        self.running = True
+        # holds the last valid frame to return during reconnection attempts
+        self.frame = None
+
+    def read(self):
+        if self.source is None:
+            return None
+        if self.running and self.reset_attempts > 0:
+            frame = self.source.read()
+            if frame is None:
+                self.source.stop()
+                self.reset_attempts -= 1
+                print(
+                    "Re-connection Attempt-{} occurred at time: {}".format(
+                        str(self.reset_attempts),
+                        datetime.datetime.now().strftime("%m-%d-%Y %I:%M:%S%p"),
+                    )
+                )
+                time.sleep(self.reset_delay)
+                self.source = VideoGear(
+                    source=self.cam_address, stabilize=self.stabilize
+                ).start()
+                # return previous frame
+                return self.frame
+            else:
+                self.frame = frame
+                return frame
+        else:
+            return None
+
+    def stop(self):
+        self.running = False
+        self.reset_attempts = 0
+        self.frame = None
+        if not self.source is None:
+            self.source.stop()
+
+
+if __name__ == "__main__":
+    # open any valid video stream
+    stream = Reconnecting_VideoGear(
+        cam_address="rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov",
+        reset_attempts=20,
+        reset_delay=5,
+    )
+
+    # loop over
+    while True:
+
+        # read frames from stream
+        frame = stream.read()
+
+        # check for frame if None-type
+        if frame is None:
+            break
+
+        # {do something with the frame here}
+
+        # Show output window
+        cv2.imshow("Output", frame)
+
+        # check for 'q' key if pressed
+        key = cv2.waitKey(1) & 0xFF
+        if key == ord("q"):
+            break
+
+    # close output window
+    cv2.destroyAllWindows()
+
+    # safely close video stream
+    stream.stop()
+```
+
+&nbsp;
\ No newline at end of file
diff --git a/docs/help/webgear_ex.md b/docs/help/webgear_ex.md
new file mode 100644
index
000000000..05b1dc628
--- /dev/null
+++ b/docs/help/webgear_ex.md
@@ -0,0 +1,233 @@
+
+
+# WebGear Examples
+
+&nbsp;
+
+## Using WebGear with RaspberryPi Camera Module
+
+Because of WebGear API's flexible internal wrapper around VideoGear, it can easily access any parameter of CamGear and PiGear videocapture APIs.
+
+!!! info "Following usage examples are just an idea of what can be done with WebGear API, you can try various [VideoGear](../../gears/videogear/params/), [CamGear](../../gears/camgear/params/) and [PiGear](../../gears/pigear/params/) parameters directly in WebGear API in the similar manner."
+
+Here's a bare-minimum example of using WebGear API with the Raspberry Pi camera module while tweaking its various properties in just one-liner:
+
+```python
+# import libs
+import uvicorn
+from vidgear.gears.asyncio import WebGear
+
+# various webgear performance and Raspberry Pi camera tweaks
+options = {
+    "frame_size_reduction": 40,
+    "jpeg_compression_quality": 80,
+    "jpeg_compression_fastdct": True,
+    "jpeg_compression_fastupsample": False,
+    "hflip": True,
+    "exposure_mode": "auto",
+    "iso": 800,
+    "exposure_compensation": 15,
+    "awb_mode": "horizon",
+    "sensor_mode": 0,
+}
+
+# initialize WebGear app
+web = WebGear(
+    enablePiCamera=True, resolution=(640, 480), framerate=60, logging=True, **options
+)
+
+# run this app on Uvicorn server at address http://localhost:8000/
+uvicorn.run(web(), host="localhost", port=8000)
+
+# close app safely
+web.shutdown()
+```
+
+&nbsp;
+
+## Using WebGear with real-time Video Stabilization enabled
+
+Here's an example of using WebGear API with real-time Video Stabilization enabled:
+
+```python
+# import libs
+import uvicorn
+from vidgear.gears.asyncio import WebGear
+
+# various webgear performance tweaks
+options = {
+    "frame_size_reduction": 40,
+    "jpeg_compression_quality": 80,
+    "jpeg_compression_fastdct": True,
+    "jpeg_compression_fastupsample": False,
+}
+
+# initialize WebGear app with a raw source and enable video
stabilization(`stabilize=True`) +web = WebGear(source="foo.mp4", stabilize=True, logging=True, **options) + +# run this app on Uvicorn server at address http://localhost:8000/ +uvicorn.run(web(), host="localhost", port=8000) + +# close app safely +web.shutdown() +``` + +  + + +## Display Two Sources Simultaneously in WebGear + +In this example, we'll be displaying two video feeds side-by-side simultaneously on browser using WebGear API by defining two separate frame generators: + +!!! new "New in v0.2.2" + This example was added in `v0.2.2`. + +**Step-1 (Trigger Auto-Generation Process):** Firstly, run this bare-minimum code to trigger the [**Auto-generation**](../../gears/webgear/#auto-generation-process) process, this will create `.vidgear` directory at current location _(directory where you'll run this code)_: + +```python +# import required libraries +import uvicorn +from vidgear.gears.asyncio import WebGear + +# provide current directory to save data files +options = {"custom_data_location": "./"} + +# initialize WebGear app +web = WebGear(source=0, logging=True, **options) + +# close app safely +web.shutdown() +``` + +**Step-2 (Replace HTML file):** Now, go inside `.vidgear` :arrow_right: `webgear` :arrow_right: `templates` directory at current location of your machine, and there replace content of `index.html` file with following: + +```html +{% extends "base.html" %} +{% block content %} +

+<h1 align="center">WebGear Video Feed</h1>
+<div>
+    <img src="/video" alt="Feed" />
+    <img src="/video2" alt="Feed" />
+</div>
+{% endblock %} +``` + +**Step-3 (Build your own Frame Producers):** Now, create a python script code with OpenCV source, as follows: + +```python +# import necessary libs +import uvicorn, asyncio, cv2 +from vidgear.gears.asyncio import WebGear +from vidgear.gears.asyncio.helper import reducer +from starlette.responses import StreamingResponse +from starlette.routing import Route + +# provide current directory to load data files +options = {"custom_data_location": "./"} + +# initialize WebGear app without any source +web = WebGear(logging=True, **options) + +# create your own custom frame producer +async def my_frame_producer1(): + + # !!! define your first video source here !!! + # Open any video stream such as "foo1.mp4" + stream = cv2.VideoCapture("foo1.mp4") + # loop over frames + while True: + # read frame from provided source + (grabbed, frame) = stream.read() + # break if NoneType + if not grabbed: + break + + # do something with your OpenCV frame here + + # reducer frames size if you want more performance otherwise comment this line + frame = await reducer(frame, percentage=30) # reduce frame by 30% + # handle JPEG encoding + encodedImage = cv2.imencode(".jpg", frame)[1].tobytes() + # yield frame in byte format + yield (b"--frame\r\nContent-Type:video/jpeg2000\r\n\r\n" + encodedImage + b"\r\n") + await asyncio.sleep(0.00001) + # close stream + stream.release() + + +# create your own custom frame producer +async def my_frame_producer2(): + + # !!! define your second video source here !!! 
+ # Open any video stream such as "foo2.mp4" + stream = cv2.VideoCapture("foo2.mp4") + # loop over frames + while True: + # read frame from provided source + (grabbed, frame) = stream.read() + # break if NoneType + if not grabbed: + break + + # do something with your OpenCV frame here + + # reducer frames size if you want more performance otherwise comment this line + frame = await reducer(frame, percentage=30) # reduce frame by 30% + # handle JPEG encoding + encodedImage = cv2.imencode(".jpg", frame)[1].tobytes() + # yield frame in byte format + yield (b"--frame\r\nContent-Type:video/jpeg2000\r\n\r\n" + encodedImage + b"\r\n") + await asyncio.sleep(0.00001) + # close stream + stream.release() + + +async def custom_video_response(scope): + """ + Return a async video streaming response for `my_frame_producer2` generator + """ + assert scope["type"] in ["http", "https"] + await asyncio.sleep(0.00001) + return StreamingResponse( + my_frame_producer2(), + media_type="multipart/x-mixed-replace; boundary=frame", + ) + + +# add your custom frame producer to config +web.config["generator"] = my_frame_producer1 + +# append new route i.e. new custom route with custom response +web.routes.append( + Route("/video2", endpoint=custom_video_response) + ) + +# run this app on Uvicorn server at address http://localhost:8000/ +uvicorn.run(web(), host="localhost", port=8000) + +# close app safely +web.shutdown() +``` + +!!! success "On successfully running this code, the output stream will be displayed at address http://localhost:8000/ in Browser." + + +  \ No newline at end of file diff --git a/docs/help/webgear_faqs.md b/docs/help/webgear_faqs.md index ca7e1b42d..e39194337 100644 --- a/docs/help/webgear_faqs.md +++ b/docs/help/webgear_faqs.md @@ -48,7 +48,7 @@ limitations under the License. ## Is it possible to stream on a different device on the network with WebGear? -!!! 
note "If you set `"0.0.0.0"` as host value instead of `"localhost"` on Host Machine, then you must still use http://localhost:8000/ to access stream on your host machine browser." +!!! alert "If you set `"0.0.0.0"` as host value instead of `"localhost"` on Host Machine, then you must still use http://localhost:8000/ to access stream on that same host machine browser." For accessing WebGear on different Client Devices on the network, use `"0.0.0.0"` as host value instead of `"localhost"` on Host Machine. Then type the IP-address of source machine followed by the defined `port` value in your desired Client Device's browser (for e.g. http://192.27.0.101:8000) to access the stream. diff --git a/docs/help/webgear_rtc_ex.md b/docs/help/webgear_rtc_ex.md new file mode 100644 index 000000000..894599957 --- /dev/null +++ b/docs/help/webgear_rtc_ex.md @@ -0,0 +1,213 @@ + + +# WebGear_RTC_RTC Examples + +  + +## Using WebGear_RTC with RaspberryPi Camera Module + +Because of WebGear_RTC API's flexible internal wapper around VideoGear, it can easily access any parameter of CamGear and PiGear videocapture APIs. + +!!! info "Following usage examples are just an idea of what can be done with WebGear_RTC API, you can try various [VideoGear](../../gears/videogear/params/), [CamGear](../../gears/camgear/params/) and [PiGear](../../gears/pigear/params/) parameters directly in WebGear_RTC API in the similar manner." 
+ +Here's a bare-minimum example of using WebGear_RTC API with the Raspberry Pi camera module while tweaking its various properties in just one-liner: + +```python +# import libs +import uvicorn +from vidgear.gears.asyncio import WebGear_RTC + +# various webgear_rtc performance and Raspberry Pi camera tweaks +options = { + "frame_size_reduction": 25, + "hflip": True, + "exposure_mode": "auto", + "iso": 800, + "exposure_compensation": 15, + "awb_mode": "horizon", + "sensor_mode": 0, +} + +# initialize WebGear_RTC app +web = WebGear_RTC( + enablePiCamera=True, resolution=(640, 480), framerate=60, logging=True, **options +) + +# run this app on Uvicorn server at address http://localhost:8000/ +uvicorn.run(web(), host="localhost", port=8000) + +# close app safely +web.shutdown() +``` + +  + +## Using WebGear_RTC with real-time Video Stabilization enabled + +Here's an example of using WebGear_RTC API with real-time Video Stabilization enabled: + +```python +# import libs +import uvicorn +from vidgear.gears.asyncio import WebGear_RTC + +# various webgear_rtc performance tweaks +options = { + "frame_size_reduction": 25, +} + +# initialize WebGear_RTC app with a raw source and enable video stabilization(`stabilize=True`) +web = WebGear_RTC(source="foo.mp4", stabilize=True, logging=True, **options) + +# run this app on Uvicorn server at address http://localhost:8000/ +uvicorn.run(web(), host="localhost", port=8000) + +# close app safely +web.shutdown() +``` + +  + +## Display Two Sources Simultaneously in WebGear_RTC + +In this example, we'll be displaying two video feeds side-by-side simultaneously on browser using WebGear_RTC API by simply concatenating frames in real-time: + +!!! new "New in v0.2.2" + This example was added in `v0.2.2`. 
+ +```python +# import necessary libs +import uvicorn, asyncio, cv2 +import numpy as np +from av import VideoFrame +from aiortc import VideoStreamTrack +from vidgear.gears.asyncio import WebGear_RTC +from vidgear.gears.asyncio.helper import reducer + +# initialize WebGear_RTC app without any source +web = WebGear_RTC(logging=True) + +# frame concatenator +def get_conc_frame(frame1, frame2): + h1, w1 = frame1.shape[:2] + h2, w2 = frame2.shape[:2] + + # create empty matrix + vis = np.zeros((max(h1, h2), w1 + w2, 3), np.uint8) + + # combine 2 frames + vis[:h1, :w1, :3] = frame1 + vis[:h2, w1 : w1 + w2, :3] = frame2 + + return vis + + +# create your own Bare-Minimum Custom Media Server +class Custom_RTCServer(VideoStreamTrack): + """ + Custom Media Server using OpenCV, an inherit-class + to aiortc's VideoStreamTrack. + """ + + def __init__(self, source1=None, source2=None): + + # don't forget this line! + super().__init__() + + # check is source are provided + if source1 is None or source2 is None: + raise ValueError("Provide both source") + + # initialize global params + # define both source here + self.stream1 = cv2.VideoCapture(source1) + self.stream2 = cv2.VideoCapture(source2) + + async def recv(self): + """ + A coroutine function that yields `av.frame.Frame`. + """ + # don't forget this function!!! 
+ + # get next timestamp + pts, time_base = await self.next_timestamp() + + # read video frame + (grabbed1, frame1) = self.stream1.read() + (grabbed2, frame2) = self.stream2.read() + + # if NoneType + if not grabbed1 or not grabbed2: + return None + else: + print("Got frames") + + print(frame1.shape) + print(frame2.shape) + + # concatenate frame + frame = get_conc_frame(frame1, frame2) + + print(frame.shape) + + # reducer frames size if you want more performance otherwise comment this line + # frame = await reducer(frame, percentage=30) # reduce frame by 30% + + # contruct `av.frame.Frame` from `numpy.nd.array` + av_frame = VideoFrame.from_ndarray(frame, format="bgr24") + av_frame.pts = pts + av_frame.time_base = time_base + + # return `av.frame.Frame` + return av_frame + + def terminate(self): + """ + Gracefully terminates VideoGear stream + """ + # don't forget this function!!! + + # terminate + if not (self.stream1 is None): + self.stream1.release() + self.stream1 = None + + if not (self.stream2 is None): + self.stream2.release() + self.stream2 = None + + +# assign your custom media server to config with both adequate sources (for e.g. foo1.mp4 and foo2.mp4) +web.config["server"] = Custom_RTCServer( + source1="dance_videos/foo1.mp4", source2="dance_videos/foo2.mp4" +) + +# run this app on Uvicorn server at address http://localhost:8000/ +uvicorn.run(web(), host="localhost", port=8000) + +# close app safely +web.shutdown() +``` + +!!! success "On successfully running this code, the output stream will be displayed at address http://localhost:8000/ in Browser." + + +  \ No newline at end of file diff --git a/docs/help/writegear_ex.md b/docs/help/writegear_ex.md new file mode 100644 index 000000000..c505a55cb --- /dev/null +++ b/docs/help/writegear_ex.md @@ -0,0 +1,306 @@ + + + +# WriteGear Examples + +  + +## Using WriteGear's Compression Mode for YouTube-Live Streaming + +!!! new "New in v0.2.1" + This example was added in `v0.2.1`. + +!!! 
alert "This example assume you already have a [**YouTube Account with Live-Streaming enabled**](https://support.google.com/youtube/answer/2474026#enable) for publishing video." + +!!! danger "Make sure to change [_YouTube-Live Stream Key_](https://support.google.com/youtube/answer/2907883#zippy=%2Cstart-live-streaming-now) with yours in following code before running!" + +```python +# import required libraries +from vidgear.gears import CamGear +from vidgear.gears import WriteGear +import cv2 + +# define video source +VIDEO_SOURCE = "/home/foo/foo.mp4" + +# Open stream +stream = CamGear(source=VIDEO_SOURCE, logging=True).start() + +# define required FFmpeg optimizing parameters for your writer +# [NOTE]: Added VIDEO_SOURCE as audio-source, since YouTube rejects audioless streams! +output_params = { + "-i": VIDEO_SOURCE, + "-acodec": "aac", + "-ar": 44100, + "-b:a": 712000, + "-vcodec": "libx264", + "-preset": "medium", + "-b:v": "4500k", + "-bufsize": "512k", + "-pix_fmt": "yuv420p", + "-f": "flv", +} + +# [WARNING] Change your YouTube-Live Stream Key here: +YOUTUBE_STREAM_KEY = "xxxx-xxxx-xxxx-xxxx-xxxx" + +# Define writer with defined parameters and +writer = WriteGear( + output_filename="rtmp://a.rtmp.youtube.com/live2/{}".format(YOUTUBE_STREAM_KEY), + logging=True, + **output_params +) + +# loop over +while True: + + # read frames from stream + frame = stream.read() + + # check for frame if Nonetype + if frame is None: + break + + # {do something with the frame here} + + # write frame to writer + writer.write(frame) + +# safely close video stream +stream.stop() + +# safely close writer +writer.close() +``` + +  + + +## Using WriteGear's Compression Mode creating MP4 segments from a video stream + +!!! new "New in v0.2.1" + This example was added in `v0.2.1`. 
+
+```python
+# import required libraries
+from vidgear.gears import VideoGear
+from vidgear.gears import WriteGear
+import cv2
+
+# Open any video source `foo.mp4`
+stream = VideoGear(
+    source="foo.mp4", logging=True
+).start()
+
+# define required FFmpeg optimizing parameters for your writer
+output_params = {
+    "-c:v": "libx264",
+    "-crf": 22,
+    "-map": 0,
+    "-segment_time": 9,
+    "-g": 9,
+    "-sc_threshold": 0,
+    "-force_key_frames": "expr:gte(t,n_forced*9)",
+    "-clones": ["-f", "segment"],
+}
+
+# Define writer with defined parameters
+writer = WriteGear(output_filename="output%03d.mp4", logging=True, **output_params)
+
+# loop over
+while True:
+
+    # read frames from stream
+    frame = stream.read()
+
+    # check for frame if Nonetype
+    if frame is None:
+        break
+
+    # {do something with the frame here}
+
+    # write frame to writer
+    writer.write(frame)
+
+    # Show output window
+    cv2.imshow("Output Frame", frame)
+
+    # check for 'q' key if pressed
+    key = cv2.waitKey(1) & 0xFF
+    if key == ord("q"):
+        break
+
+# close output window
+cv2.destroyAllWindows()
+
+# safely close video stream
+stream.stop()
+
+# safely close writer
+writer.close()
+```
+
+&nbsp;
+
+
+## Using WriteGear's Compression Mode to add external audio file input to video frames
+
+!!! new "New in v0.2.1"
+    This example was added in `v0.2.1`.
+
+!!! failure "Make sure this `-i` audio-source is compatible with the provided video-source, otherwise you may encounter multiple errors, or no output at all."
+
+```python
+# import required libraries
+from vidgear.gears import CamGear
+from vidgear.gears import WriteGear
+import cv2
+
+# open any valid video stream (e.g. a `foo_video.mp4` file)
+stream = CamGear(source="foo_video.mp4").start()
+
+# add various parameters, along with custom audio
+stream_params = {
+    "-input_framerate": stream.framerate,  # controlled framerate for audio-video sync !!! don't forget this line !!!
+    "-i": "foo_audio.aac",  # assigns input audio-source: "foo_audio.aac"
+}
+
+# Define writer with defined parameters
+writer = WriteGear(output_filename="Output.mp4", logging=True, **stream_params)
+
+# loop over
+while True:
+
+    # read frames from stream
+    frame = stream.read()
+
+    # check for frame if Nonetype
+    if frame is None:
+        break
+
+    # {do something with the frame here}
+
+    # write frame to writer
+    writer.write(frame)
+
+    # Show output window
+    cv2.imshow("Output Frame", frame)
+
+    # check for 'q' key if pressed
+    key = cv2.waitKey(1) & 0xFF
+    if key == ord("q"):
+        break
+
+# close output window
+cv2.destroyAllWindows()
+
+# safely close video stream
+stream.stop()
+
+# safely close writer
+writer.close()
+```
+
+&nbsp;
+
+
+## Using WriteGear with ROS (Robot Operating System)
+
+We will be using [`cv_bridge`](http://wiki.ros.org/cv_bridge/Tutorials/ConvertingBetweenROSImagesAndOpenCVImagesPython) to convert OpenCV frames to ROS image messages and vice-versa.
+
+In this example, we'll create a node that listens to a ROS image message topic, converts the received image messages into OpenCV frames, draws a circle on them, and then processes these frames into a lossless compressed file format in real-time.
+
+!!! new "New in v0.2.2"
+    This example was added in `v0.2.2`.
+
+!!! note "This example is a vidgear implementation of this [wiki example](http://wiki.ros.org/cv_bridge/Tutorials/ConvertingBetweenROSImagesAndOpenCVImagesPython)."
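In the node below, the circle is only drawn when the received frame is comfortably larger than the marker (`cols > 60 and rows > 60`). That guard can be exercised on plain shape tuples without ROS installed — a minimal hypothetical helper mirroring the example's check (not part of `cv_bridge` or vidgear):

```python
def can_draw_marker(shape, min_size=60):
    """Mirror the example's bounds check: rows and cols must both exceed
    `min_size` before the circle at (50, 50) is drawn on the frame."""
    rows, cols = shape[0], shape[1]
    return rows > min_size and cols > min_size


print(can_draw_marker((480, 640, 3)))  # -> True (typical camera frame)
print(can_draw_marker((32, 32, 3)))  # -> False (too small for the marker)
```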
+
+```python
+# import roslib
+import roslib
+
+roslib.load_manifest("my_package")
+
+# import other required libraries
+import sys
+import rospy
+import cv2
+from std_msgs.msg import String
+from sensor_msgs.msg import Image
+from cv_bridge import CvBridge, CvBridgeError
+from vidgear.gears import WriteGear
+
+# custom subscriber class
+class image_subscriber:
+    def __init__(self, output_filename="Output.mp4"):
+        # create CV bridge
+        self.bridge = CvBridge()
+        # define subscriber topic
+        self.image_sub = rospy.Subscriber("image_topic_sub", Image, self.callback)
+        # Define writer with default parameters
+        self.writer = WriteGear(output_filename=output_filename)
+
+    def callback(self, data):
+        # convert received data to frame
+        try:
+            cv_image = self.bridge.imgmsg_to_cv2(data, "bgr8")
+        except CvBridgeError as e:
+            print(e)
+            return
+
+        # check if frame is valid
+        if cv_image is not None:
+
+            # {do something with the frame here}
+
+            # add circle
+            (rows, cols, channels) = cv_image.shape
+            if cols > 60 and rows > 60:
+                cv2.circle(cv_image, (50, 50), 10, 255)
+
+            # write frame to writer
+            self.writer.write(cv_image)
+
+    def close(self):
+        # safely close video stream
+        self.writer.close()
+
+
+def main(args):
+    # define subscriber with suitable output filename
+    # such as `Output.mp4` for saving output
+    ic = image_subscriber(output_filename="Output.mp4")
+    # initiate ROS node on subscriber
+    rospy.init_node("image_subscriber", anonymous=True)
+    try:
+        # run node
+        rospy.spin()
+    except KeyboardInterrupt:
+        print("Shutting down")
+    finally:
+        # close subscriber
+        ic.close()
+
+
+if __name__ == "__main__":
+    main(sys.argv)
+```
+
+&nbsp;
\ No newline at end of file
diff --git a/docs/help/writegear_faqs.md b/docs/help/writegear_faqs.md
index 53fe2950c..bb2764b2c 100644
--- a/docs/help/writegear_faqs.md
+++ b/docs/help/writegear_faqs.md
@@ -39,10 +39,8 @@ limitations under the License.
 
 **Answer:** WriteGear will exit with `ValueError` if you feed frames of different dimensions or channels.
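That `ValueError` can also be caught on the caller's side by pinning the first frame's shape and validating every later frame before it reaches the writer — a minimal numpy-only sketch (the `FrameShapeGuard` helper below is hypothetical, not part of vidgear):

```python
import numpy as np


class FrameShapeGuard:
    """Remember the first frame's shape and reject any later frame that
    differs, mirroring WriteGear's strictness about dimensions/channels."""

    def __init__(self):
        self.shape = None

    def check(self, frame):
        if self.shape is None:
            # lock the reference shape on the very first frame
            self.shape = frame.shape
        elif frame.shape != self.shape:
            raise ValueError(
                "Frame shape {} differs from first frame {}".format(
                    frame.shape, self.shape
                )
            )
        return frame


guard = FrameShapeGuard()
guard.check(np.zeros((480, 640, 3), dtype=np.uint8))  # locks 480x640x3
```

Calling `guard.check(frame)` just before `writer.write(frame)` turns a mid-stream crash into an error you can handle at the source of the bad frame.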
-   - ## How to install and configure FFmpeg correctly for WriteGear on my machine? **Answer:** Follow these [Installation Instructions ➶](../../gears/writegear/compression/advanced/ffmpeg_install/) for its installation. @@ -109,205 +107,21 @@ limitations under the License. ## Is YouTube-Live Streaming possibe with WriteGear? -**Answer:** Yes, See example below: - -!!! new "New in v0.2.1" - This example was added in `v0.2.1`. - -!!! alert "This example assume you already have a [**YouTube Account with Live-Streaming enabled**](https://support.google.com/youtube/answer/2474026#enable) for publishing video." - -!!! danger "Make sure to change [_YouTube-Live Stream Key_](https://support.google.com/youtube/answer/2907883#zippy=%2Cstart-live-streaming-now) with yours in following code before running!" - -```python -# import required libraries -from vidgear.gears import CamGear -from vidgear.gears import WriteGear -import cv2 - -# define video source -VIDEO_SOURCE = "/home/foo/foo.mp4" - -# Open stream -stream = CamGear(source=VIDEO_SOURCE, logging=True).start() - -# define required FFmpeg optimizing parameters for your writer -# [NOTE]: Added VIDEO_SOURCE as audio-source, since YouTube rejects audioless streams! 
-output_params = { - "-i": VIDEO_SOURCE, - "-acodec": "aac", - "-ar": 44100, - "-b:a": 712000, - "-vcodec": "libx264", - "-preset": "medium", - "-b:v": "4500k", - "-bufsize": "512k", - "-pix_fmt": "yuv420p", - "-f": "flv", -} - -# [WARNING] Change your YouTube-Live Stream Key here: -YOUTUBE_STREAM_KEY = "xxxx-xxxx-xxxx-xxxx-xxxx" - -# Define writer with defined parameters and -writer = WriteGear( - output_filename="rtmp://a.rtmp.youtube.com/live2/{}".format(YOUTUBE_STREAM_KEY), - logging=True, - **output_params -) - -# loop over -while True: - - # read frames from stream - frame = stream.read() - - # check for frame if Nonetype - if frame is None: - break - - # {do something with the frame here} - - # write frame to writer - writer.write(frame) - -# safely close video stream -stream.stop() - -# safely close writer -writer.close() -``` +**Answer:** Yes, See [this bonus example ➶](../writegear_ex/#using-writegears-compression-mode-for-youtube-live-streaming).   ## How to create MP4 segments from a video stream with WriteGear? -**Answer:** See example below: - -!!! new "New in v0.2.1" - This example was added in `v0.2.1`. 
- -```python -# import required libraries -from vidgear.gears import VideoGear -from vidgear.gears import WriteGear -import cv2 - -# Open any video source `foo.mp4` -stream = VideoGear( - source="foo.mp4", logging=True -).start() - -# define required FFmpeg optimizing parameters for your writer -output_params = { - "-c:v": "libx264", - "-crf": 22, - "-map": 0, - "-segment_time": 9, - "-g": 9, - "-sc_threshold": 0, - "-force_key_frames": "expr:gte(t,n_forced*9)", - "-clones": ["-f", "segment"], -} - -# Define writer with defined parameters -writer = WriteGear(output_filename="output%03d.mp4", logging=True, **output_params) - -# loop over -while True: - - # read frames from stream - frame = stream.read() - - # check for frame if Nonetype - if frame is None: - break - - # {do something with the frame here} - - # write frame to writer - writer.write(frame) - - # Show output window - cv2.imshow("Output Frame", frame) - - # check for 'q' key if pressed - key = cv2.waitKey(1) & 0xFF - if key == ord("q"): - break - -# close output window -cv2.destroyAllWindows() - -# safely close video stream -stream.stop() - -# safely close writer -writer.close() -``` +**Answer:** See [this bonus example ➶](../writegear_ex/#using-writegears-compression-mode-creating-mp4-segments-from-a-video-stream).   ## How add external audio file input to video frames? -**Answer:** See example below: - -!!! new "New in v0.2.1" - This example was added in `v0.2.1`. - -!!! failure "Make sure this `-i` audio-source it compatible with provided video-source, otherwise you encounter multiple errors or no output at all." 
-
-```python
-# import required libraries
-from vidgear.gears import CamGear
-from vidgear.gears import WriteGear
-import cv2
-
-# open any valid video stream(for e.g `foo_video.mp4` file)
-stream = CamGear(source="foo_video.mp4").start()
-
-# add various parameters, along with custom audio
-stream_params = {
-    "-input_framerate": stream.framerate,  # controlled framerate for audio-video sync !!! don't forget this line !!!
-    "-i": "foo_audio.aac",  # assigns input audio-source: "foo_audio.aac"
-}
-
-# Define writer with defined parameters
-writer = WriteGear(output_filename="Output.mp4", logging=True, **stream_params)
-
-# loop over
-while True:
-
-    # read frames from stream
-    frame = stream.read()
-
-    # check for frame if Nonetype
-    if frame is None:
-        break
-
-    # {do something with the frame here}
-
-    # write frame to writer
-    writer.write(frame)
-
-    # Show output window
-    cv2.imshow("Output Frame", frame)
-
-    # check for 'q' key if pressed
-    key = cv2.waitKey(1) & 0xFF
-    if key == ord("q"):
-        break
-
-# close output window
-cv2.destroyAllWindows()
-
-# safely close video stream
-stream.stop()
-
-# safely close writer
-writer.close()
-```
+**Answer:** See [this bonus example ➶](../writegear_ex/#using-writegears-compression-mode-to-add-external-audio-file-input-to-video-frames).
 
 &nbsp;
 
diff --git a/docs/installation/pip_install.md b/docs/installation/pip_install.md
index 8d6af3436..dfb8c2a3e 100644
--- a/docs/installation/pip_install.md
+++ b/docs/installation/pip_install.md
@@ -29,6 +29,62 @@ limitations under the License.
 
 When installing VidGear with [pip](https://pip.pypa.io/en/stable/installing/), you need to check manually if following dependencies are installed:
 
+???+ alert "Upgrade your `pip`"
+
+    It is strongly advised to upgrade to the latest `pip` before installing vidgear to avoid any undesired installation error(s). There are two mechanisms to upgrade `pip`:
+
+    1.
**`ensurepip`:** Python comes with an [`ensurepip`](https://docs.python.org/3/library/ensurepip.html#module-ensurepip) module[^1], which can easily upgrade/install `pip` in any Python environment.
+
+        === "Linux/MacOS"
+
+            ```sh
+            python -m ensurepip --upgrade
+
+            ```
+
+        === "Windows"
+
+            ```sh
+            py -m ensurepip --upgrade
+
+            ```
+    2. **`pip`:** You can also use the existing `pip` to upgrade itself:
+
+        ??? info "Install `pip` if not present"
+
+            * Download the script, from https://bootstrap.pypa.io/get-pip.py.
+            * Open a terminal/command prompt, `cd` to the folder containing the `get-pip.py` file and run:
+
+            === "Linux/MacOS"
+
+                ```sh
+                python get-pip.py
+
+                ```
+
+            === "Windows"
+
+                ```sh
+                py get-pip.py
+
+                ```
+            More details about this script can be found in [pypa/get-pip’s README](https://github.com/pypa/get-pip).
+
+
+    === "Linux/MacOS"
+
+        ```sh
+        python -m pip install pip --upgrade
+
+        ```
+
+    === "Windows"
+
+        ```sh
+        py -m pip install pip --upgrade
+
+        ```
 
 ### Core Prerequisites
 
 * #### OpenCV
@@ -50,11 +106,12 @@ When installing VidGear with [pip](https://pip.pypa.io/en/stable/installing/), y
     pip install opencv-python
     ```
 
+
 ### API Specific Prerequisites
 
 * #### FFmpeg
 
-    Require for the video compression and encoding compatibilities within [**StreamGear**](#streamgear) API and [**WriteGear API's Compression Mode**](../../gears/writegear/compression/overview/).
+    Required only for video compression and encoding capabilities within the [**StreamGear API**](../../gears/streamgear/overview/) and [**WriteGear API's Compression Mode**](../../gears/writegear/compression/overview/).
 
     !!! tip "FFmpeg Installation"
 
@@ -77,7 +134,7 @@ When installing VidGear with [pip](https://pip.pypa.io/en/stable/installing/), y
 
     ??? error "Microsoft Visual C++ 14.0 is required."
 
-        Installing `aiortc` on windows requires Microsoft Build Tools for Visual C++ libraries installed.
You can easily fix this error by installing any **ONE** of these choices:
+        Installing `aiortc` on windows may sometimes require Microsoft Build Tools for Visual C++ libraries installed. You can easily fix this error by installing any **ONE** of these choices:
 
         !!! info "While the error is calling for VC++ 14.0 - but newer versions of Visual C++ libraries works as well."
 
@@ -126,7 +183,7 @@ When installing VidGear with [pip](https://pip.pypa.io/en/stable/installing/), y
     python -m pip install vidgear[asyncio]
     ```
 
-    If you don't have the privileges to the directory you're installing package. Then use `--user` flag, that makes pip install packages in your home directory instead:
+    And if you don't have the privileges to the directory you're installing the package into, then use the `--user` flag, which makes pip install packages in your home directory instead:
 
     ``` sh
     python -m pip install --user vidgear
@@ -135,12 +192,66 @@ When installing VidGear with [pip](https://pip.pypa.io/en/stable/installing/), y
     python -m pip install --user vidgear[asyncio]
     ```
 
+    Or, if you're using `py` as an alias for your installed Python, then:
+
+    ``` sh
+    py -m pip install --user vidgear
+
+    # or with asyncio support
+    py -m pip install --user vidgear[asyncio]
+    ```
+
+??? experiment "Installing vidgear with only selective dependencies"
+
+    Starting with version `v0.2.2`, you can now run any VidGear API by installing just the specific dependencies required by the API in use _(except for some Core dependencies)_.
+
+    This is useful when you want to manually review, select and install minimal API-specific dependencies on a bare-minimum vidgear from scratch on your system:
+
+    - To install bare-minimum vidgear without any dependencies, use the [`--no-deps`](https://pip.pypa.io/en/stable/cli/pip_install/#cmdoption-no-deps) pip flag as follows:
+
+        ```sh
+        # Install stable release without any dependencies
+        pip install --no-deps --upgrade vidgear
+        ```
+
+    - Then, you must install all **Core dependencies**:
+
+        ```sh
+        # Install core dependencies
+        pip install cython numpy requests tqdm colorlog
+
+        # Install opencv (only if not installed previously)
+        pip install opencv-python
+        ```
+
+    - Finally, manually install your **API-specific dependencies** as required by your API (in use):
+
+
+        | APIs | Dependencies |
+        |:---:|:---|
+        | CamGear | `pafy`, `youtube-dl`, `streamlink` |
+        | PiGear | `picamera` |
+        | VideoGear | - |
+        | ScreenGear | `mss`, `pyscreenshot`, `Pillow` |
+        | WriteGear | **FFmpeg:** See [this doc ➶](https://abhitronix.github.io/vidgear/v0.2.2-dev/gears/writegear/compression/advanced/ffmpeg_install/#ffmpeg-installation-instructions) |
+        | StreamGear | **FFmpeg:** See [this doc ➶](https://abhitronix.github.io/vidgear/v0.2.2-dev/gears/streamgear/ffmpeg_install/#ffmpeg-installation-instructions) |
+        | NetGear | `pyzmq`, `simplejpeg` |
+        | WebGear | `starlette`, `jinja2`, `uvicorn`, `simplejpeg` |
+        | WebGear_RTC | `aiortc`, `starlette`, `jinja2`, `uvicorn` |
+        | NetGear_Async | `pyzmq`, `msgpack`, `msgpack_numpy`, `uvloop` |
+
+        ```sh
+        # Just copy-&-paste from above table
+        pip install
+        ```
+
+
 ```sh
-# Install stable release
-pip install vidgear
+# Install latest stable release
+pip install -U vidgear
 
-# Or Install stable release with Asyncio support
-pip install vidgear[asyncio]
+# Or install latest stable release with Asyncio support
+pip install -U vidgear[asyncio]
 ```
 
 **And if you prefer to install VidGear directly from the repository:**
@@ -162,3 +273,5 @@ pip
install vidgear-0.2.2-py3-none-any.whl[asyncio]
 ```
 
 &nbsp;
+
+[^1]: :warning: The `ensurepip` module is missing/disabled on Ubuntu. Use the second method.
\ No newline at end of file
diff --git a/docs/installation/source_install.md b/docs/installation/source_install.md
index f71a97e09..9f1e2cee3 100644
--- a/docs/installation/source_install.md
+++ b/docs/installation/source_install.md
@@ -31,13 +31,69 @@ When installing VidGear from source, FFmpeg and Aiortc are the only two API spec
 
 !!! question "What about rest of the dependencies?"
 
     Any other python dependencies _(Core/API specific)_ will be automatically installed based on your OS specifications.
+
+???+ alert "Upgrade your `pip`"
+
+    It is strongly advised to upgrade to the latest `pip` before installing vidgear to avoid any undesired installation error(s). There are two mechanisms to upgrade `pip`:
+
+    1. **`ensurepip`:** Python comes with an [`ensurepip`](https://docs.python.org/3/library/ensurepip.html#module-ensurepip) module[^1], which can easily upgrade/install `pip` in any Python environment.
+
+        === "Linux/MacOS"
+
+            ```sh
+            python -m ensurepip --upgrade
+
+            ```
+
+        === "Windows"
+
+            ```sh
+            py -m ensurepip --upgrade
+
+            ```
+    2. **`pip`:** You can also use the existing `pip` to upgrade itself:
+
+        ??? info "Install `pip` if not present"
+
+            * Download the script, from https://bootstrap.pypa.io/get-pip.py.
+            * Open a terminal/command prompt, `cd` to the folder containing the `get-pip.py` file and run:
+
+            === "Linux/MacOS"
+
+                ```sh
+                python get-pip.py
+
+                ```
+
+            === "Windows"
+
+                ```sh
+                py get-pip.py
+
+                ```
+            More details about this script can be found in [pypa/get-pip’s README](https://github.com/pypa/get-pip).
+
+
+    === "Linux/MacOS"
+
+        ```sh
+        python -m pip install pip --upgrade
+
+        ```
+
+    === "Windows"
+
+        ```sh
+        py -m pip install pip --upgrade
+
+        ```
 
 ### API Specific Prerequisites
 
 * #### FFmpeg
 
-    Require for the video compression and encoding compatibilities within [**StreamGear**](#streamgear) API and [**WriteGear API's Compression Mode**](../../gears/writegear/compression/overview/).
+    Required only for video compression and encoding capabilities within the [**StreamGear API**](../../gears/streamgear/overview/) and [**WriteGear API's Compression Mode**](../../gears/writegear/compression/overview/).
 
     !!! tip "FFmpeg Installation"
 
@@ -50,7 +106,7 @@ When installing VidGear from source, FFmpeg and Aiortc are the only two API spec
 
     ??? error "Microsoft Visual C++ 14.0 is required."
 
-        Installing `aiortc` on windows requires Microsoft Build Tools for Visual C++ libraries installed. You can easily fix this error by installing any **ONE** of these choices:
+        Installing `aiortc` on windows may sometimes require Microsoft Build Tools for Visual C++ libraries installed. You can easily fix this error by installing any **ONE** of these choices:
 
         !!! info "While the error is calling for VC++ 14.0 - but newer versions of Visual C++ libraries works as well."
 
@@ -85,7 +141,23 @@ When installing VidGear from source, FFmpeg and Aiortc are the only two API spec
 
     * Use following commands to clone and install VidGear:
 
-    ```sh
+    ```sh
+    # clone the repository and get inside
+    git clone https://github.com/abhiTronix/vidgear.git && cd vidgear
+
+    # checkout the latest testing branch
+    git checkout testing
+
+    # install normally
+    python -m pip install .
+
+    # OR install with asyncio support
+    python -m pip install .[asyncio]
+    ```
+
+    * If you're using `py` as an alias for your installed Python, then:
+
+    ``` sh
     # clone the repository and get inside
     git clone https://github.com/abhiTronix/vidgear.git && cd vidgear
@@ -97,7 +169,57 @@ When installing VidGear from source, FFmpeg and Aiortc are the only two API spec
 
     # OR install with asyncio support
     python - m pip install .[asyncio]
-    ```
+    ```
+
+??? experiment "Installing vidgear with only selective dependencies"
+
+    Starting with version `v0.2.2`, you can now run any VidGear API by installing just the specific dependencies required by the API in use _(except for some Core dependencies)_.
+
+    This is useful when you want to manually review, select and install minimal API-specific dependencies on a bare-minimum vidgear from scratch on your system:
+
+    - To install bare-minimum vidgear without any dependencies, use the [`--no-deps`](https://pip.pypa.io/en/stable/cli/pip_install/#cmdoption-no-deps) pip flag as follows:
+
+        ```sh
+        # clone the repository and get inside
+        git clone https://github.com/abhiTronix/vidgear.git && cd vidgear
+
+        # checkout the latest testing branch
+        git checkout testing
+
+        # Install without any dependencies
+        pip install --no-deps .
+        ```
+
+    - Then, you must install all **Core dependencies**:
+
+        ```sh
+        # Install core dependencies
+        pip install cython numpy requests tqdm colorlog
+
+        # Install opencv (only if not installed previously)
+        pip install opencv-python
+        ```
+
+    - Finally, manually install your **API-specific dependencies** as required by your API (in use):
+
+
+        | APIs | Dependencies |
+        |:---:|:---|
+        | CamGear | `pafy`, `youtube-dl`, `streamlink` |
+        | PiGear | `picamera` |
+        | VideoGear | - |
+        | ScreenGear | `mss`, `pyscreenshot`, `Pillow` |
+        | WriteGear | **FFmpeg:** See [this doc ➶](https://abhitronix.github.io/vidgear/v0.2.2-dev/gears/writegear/compression/advanced/ffmpeg_install/#ffmpeg-installation-instructions) |
+        | StreamGear | **FFmpeg:** See [this doc ➶](https://abhitronix.github.io/vidgear/v0.2.2-dev/gears/streamgear/ffmpeg_install/#ffmpeg-installation-instructions) |
+        | NetGear | `pyzmq`, `simplejpeg` |
+        | WebGear | `starlette`, `jinja2`, `uvicorn`, `simplejpeg` |
+        | WebGear_RTC | `aiortc`, `starlette`, `jinja2`, `uvicorn` |
+        | NetGear_Async | `pyzmq`, `msgpack`, `msgpack_numpy`, `uvloop` |
+
+        ```sh
+        # Just copy-&-paste from above table
+        pip install
+        ```
 
 ```sh
 # clone the repository and get inside
@@ -123,3 +245,6 @@ pip install git+git://github.com/abhiTronix/vidgear@testing#egg=vidgear[asyncio]
 ```
 
 &nbsp;
+
+
+[^1]: The `ensurepip` module was added to the Python standard library in Python 3.4.
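The API-specific dependency table above can double as a pre-flight check: before using an API on a bare-minimum install, verify that its optional modules are importable. A stdlib-only sketch — the `API_DEPS` mapping below is a hypothetical partial copy of the table, keyed by *import* names (`pyzmq` imports as `zmq`, `Pillow` as `PIL`), and is not part of vidgear:

```python
import importlib.util

# Hypothetical partial mapping of APIs to the import names of their optional
# dependencies (pyzmq -> "zmq", Pillow -> "PIL"); not part of vidgear itself.
API_DEPS = {
    "NetGear": ["zmq", "simplejpeg"],
    "ScreenGear": ["mss", "pyscreenshot", "PIL"],
    "VideoGear": [],  # VideoGear needs no extra dependencies
}


def missing_deps(api):
    """Return the modules from API_DEPS[api] that are not importable."""
    return [mod for mod in API_DEPS.get(api, []) if importlib.util.find_spec(mod) is None]


print(missing_deps("VideoGear"))  # -> [] (nothing extra to install)
```

Running this before constructing the API in question gives an actionable `pip install` list instead of a mid-run `ImportError`.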
diff --git a/docs/overrides/assets/images/stream_tweak.png b/docs/overrides/assets/images/stream_tweak.png
new file mode 100644
index 000000000..6a956fd32
Binary files /dev/null and b/docs/overrides/assets/images/stream_tweak.png differ
diff --git a/docs/overrides/assets/javascripts/extra.js b/docs/overrides/assets/javascripts/extra.js
index 65c96542c..a09882de3 100755
--- a/docs/overrides/assets/javascripts/extra.js
+++ b/docs/overrides/assets/javascripts/extra.js
@@ -17,6 +17,8 @@ See the License for the specific language governing permissions and
 limitations under the License.
 ===============================================
 */
+
+// DASH StreamGear demo
 var player_dash = new Clappr.Player({
     source: 'https://rawcdn.githack.com/abhiTronix/vidgear-docs-additionals/dca65250d95eeeb87d594686c2f2c2208a015486/streamgear_video_segments/DASH/streamgear_dash.mpd',
     plugins: [DashShakaPlayback, LevelSelector],
@@ -46,6 +48,7 @@ var player_dash = new Clappr.Player({
     preload: 'metadata',
 });
 
+// HLS StreamGear demo
 var player_hls = new Clappr.Player({
     source: 'https://rawcdn.githack.com/abhiTronix/vidgear-docs-additionals/abc0c193ab26e21f97fa30c9267de6beb8a72295/streamgear_video_segments/HLS/streamgear_hls.m3u8',
     plugins: [HlsjsPlayback, LevelSelector],
@@ -81,6 +84,7 @@ var player_hls = new Clappr.Player({
     preload: 'metadata',
 });
 
+// DASH Stabilizer demo
 var player_stab = new Clappr.Player({
     source: 'https://rawcdn.githack.com/abhiTronix/vidgear-docs-additionals/fbcf0377b171b777db5e0b3b939138df35a90676/stabilizer_video_chunks/stabilizer_dash.mpd',
     plugins: [DashShakaPlayback],
@@ -97,4 +101,9 @@ var player_stab = new Clappr.Player({
     parentId: '#player_stab',
     poster: 'https://rawcdn.githack.com/abhiTronix/vidgear-docs-additionals/94bf767c28bf2fe61b9c327625af8e22745f9fdf/stabilizer_video_chunks/hd_thumbnail_2.png',
     preload: 'metadata',
-});
\ No newline at end of file
+});
+
+// gitter sidecar
+((window.gitter = {}).chat = {}).options = {
+    room: 'vidgear/community'
+};
\ No
newline at end of file diff --git a/docs/overrides/assets/stylesheets/custom.css b/docs/overrides/assets/stylesheets/custom.css index 62d2fbac2..a04895b69 100755 --- a/docs/overrides/assets/stylesheets/custom.css +++ b/docs/overrides/assets/stylesheets/custom.css @@ -22,7 +22,7 @@ limitations under the License. --md-admonition-icon--new: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23000000' d='M13 2V3H12V9H11V10H9V11H8V12H7V13H5V12H4V11H3V9H2V15H3V16H4V17H5V18H6V22H8V21H7V20H8V19H9V18H10V19H11V22H13V21H12V17H13V16H14V15H15V12H16V13H17V11H15V9H20V8H17V7H22V3H21V2M14 3H15V4H14Z' /%3E%3C/svg%3E"); --md-admonition-icon--alert: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23000000' d='M6,6.9L3.87,4.78L5.28,3.37L7.4,5.5L6,6.9M13,1V4H11V1H13M20.13,4.78L18,6.9L16.6,5.5L18.72,3.37L20.13,4.78M4.5,10.5V12.5H1.5V10.5H4.5M19.5,10.5H22.5V12.5H19.5V10.5M6,20H18A2,2 0 0,1 20,22H4A2,2 0 0,1 6,20M12,5A6,6 0 0,1 18,11V19H6V11A6,6 0 0,1 12,5Z' /%3E%3C/svg%3E"); --md-admonition-icon--xquote: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23000000' d='M20 2H4C2.9 2 2 2.9 2 4V16C2 17.1 2.9 18 4 18H8V21C8 21.6 8.4 22 9 22H9.5C9.7 22 10 21.9 10.2 
21.7L13.9 18H20C21.1 18 22 17.1 22 16V4C22 2.9 21.1 2 20 2M11 13H7V8.8L8.3 6H10.3L8.9 9H11V13M17 13H13V8.8L14.3 6H16.3L14.9 9H17V13Z' /%3E%3C/svg%3E"); - --md-admonition-icon--xwarning: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23000000' d='M13 13H11V7H13M11 15H13V17H11M15.73 3H8.27L3 8.27V15.73L8.27 21H15.73L21 15.73V8.27L15.73 3Z' /%3E%3C/svg%3E"); + --md-admonition-icon--xwarning: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath d='M23,12L20.56,9.22L20.9,5.54L17.29,4.72L15.4,1.54L12,3L8.6,1.54L6.71,4.72L3.1,5.53L3.44,9.21L1,12L3.44,14.78L3.1,18.47L6.71,19.29L8.6,22.47L12,21L15.4,22.46L17.29,19.28L20.9,18.46L20.56,14.78L23,12M13,17H11V15H13V17M13,13H11V7H13V13Z' /%3E%3C/svg%3E"); --md-admonition-icon--xdanger: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23000000' d='M12,2A9,9 0 0,0 3,11C3,14.03 4.53,16.82 7,18.47V22H9V19H11V22H13V19H15V22H17V18.46C19.47,16.81 21,14 21,11A9,9 0 0,0 12,2M8,11A2,2 0 0,1 10,13A2,2 0 0,1 8,15A2,2 0 0,1 6,13A2,2 0 0,1 8,11M16,11A2,2 0 0,1 18,13A2,2 0 0,1 16,15A2,2 0 0,1 14,13A2,2 0 0,1 16,11M12,14L13.5,17H10.5L12,14Z' /%3E%3C/svg%3E"); --md-admonition-icon--xtip: url("data:image/svg+xml,%3C%3Fxml 
version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23000000' d='M12,6A6,6 0 0,1 18,12C18,14.22 16.79,16.16 15,17.2V19A1,1 0 0,1 14,20H10A1,1 0 0,1 9,19V17.2C7.21,16.16 6,14.22 6,12A6,6 0 0,1 12,6M14,21V22A1,1 0 0,1 13,23H11A1,1 0 0,1 10,22V21H14M20,11H23V13H20V11M1,11H4V13H1V11M13,1V4H11V1H13M4.92,3.5L7.05,5.64L5.63,7.05L3.5,4.93L4.92,3.5M16.95,5.63L19.07,3.5L20.5,4.93L18.37,7.05L16.95,5.63Z' /%3E%3C/svg%3E"); --md-admonition-icon--xfail: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23000000' d='M8.27,3L3,8.27V15.73L8.27,21H15.73L21,15.73V8.27L15.73,3M8.41,7L12,10.59L15.59,7L17,8.41L13.41,12L17,15.59L15.59,17L12,13.41L8.41,17L7,15.59L10.59,12L7,8.41' /%3E%3C/svg%3E"); @@ -33,173 +33,222 @@ limitations under the License. 
   --md-admonition-icon--xabstract: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23000000' d='M3,3H21V5H3V3M3,7H15V9H3V7M3,11H21V13H3V11M3,15H15V17H3V15M3,19H21V21H3V19Z' /%3E%3C/svg%3E");
   --md-admonition-icon--xnote: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23000000' d='M20.71,7.04C20.37,7.38 20.04,7.71 20.03,8.04C20,8.36 20.34,8.69 20.66,9C21.14,9.5 21.61,9.95 21.59,10.44C21.57,10.93 21.06,11.44 20.55,11.94L16.42,16.08L15,14.66L19.25,10.42L18.29,9.46L16.87,10.87L13.12,7.12L16.96,3.29C17.35,2.9 18,2.9 18.37,3.29L20.71,5.63C21.1,6 21.1,6.65 20.71,7.04M3,17.25L12.56,7.68L16.31,11.43L6.75,21H3V17.25Z' /%3E%3C/svg%3E");
   --md-admonition-icon--xinfo: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23000000' d='M18 2H12V9L9.5 7.5L7 9V2H6C4.9 2 4 2.9 4 4V20C4 21.1 4.9 22 6 22H18C19.1 22 20 21.1 20 20V4C20 2.89 19.1 2 18 2M17.68 18.41C17.57 18.5 16.47 19.25 16.05 19.5C15.63 19.79 14 20.72 14.26 18.92C14.89 15.28 16.11 13.12 14.65 14.06C14.27 14.29 14.05 14.43 13.91 14.5C13.78 14.61 13.79 14.6 13.68 14.41S13.53 14.23 13.67 14.13C13.67 14.13 15.9 12.34 16.72 12.28C17.5 12.21 17.31 13.17 17.24 13.61C16.78 15.46 15.94 18.15 16.07 18.54C16.18 18.93 17 18.31 17.44 18C17.44 18 17.5 17.93 17.61 18.05C17.72 18.22 17.83 18.3 17.68 18.41M16.97 11.06C16.4 11.06 15.94 10.6 15.94 10.03C15.94 9.46 16.4 9 16.97 9C17.54 9 18 9.46 18 10.03C18 10.6 17.54 11.06 16.97 11.06Z' /%3E%3C/svg%3E");
+  --md-admonition-icon--xadvance: url("data:image/svg+xml,%3C%3Fxml version='1.0' encoding='UTF-8'%3F%3E%3C!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'%3E%3Csvg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' version='1.1' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath d='M7,2V4H8V18A4,4 0 0,0 12,22A4,4 0 0,0 16,18V4H17V2H7M11,16C10.4,16 10,15.6 10,15C10,14.4 10.4,14 11,14C11.6,14 12,14.4 12,15C12,15.6 11.6,16 11,16M13,12C12.4,12 12,11.6 12,11C12,10.4 12.4,10 13,10C13.6,10 14,10.4 14,11C14,11.6 13.6,12 13,12M14,7H10V4H14V7Z' /%3E%3C/svg%3E");
 }
+
+.md-typeset .admonition.advance,
+.md-typeset details.advance {
+  border-color: rgb(27, 77, 62);
+}
+
 .md-typeset .admonition.new,
 .md-typeset details.new {
-  border-color: rgb(43, 155, 70);
+  border-color: rgb(57,255,20);
+}
+
+.md-typeset .admonition.alert,
+.md-typeset details.alert {
+  border-color: rgb(255, 0, 255);
 }
+
 .md-typeset .new > .admonition-title,
 .md-typeset .new > summary {
-  background-color: rgba(43, 155, 70, 0.1);
-  border-color: rgb(43, 155, 70);
+  background-color: rgb(57,255,20,0.1);
+  border-color: rgb(57,255,20);
 }
+
 .md-typeset .new > .admonition-title::before,
 .md-typeset .new > summary::before {
-  background-color: rgb(43, 155, 70);
+  background-color: rgb(57,255,20);
   -webkit-mask-image: var(--md-admonition-icon--new);
   mask-image: var(--md-admonition-icon--new);
 }
-.md-typeset .admonition.alert,
-.md-typeset details.alert {
-  border-color: rgb(255, 0, 255);
-}
+
 .md-typeset .alert > .admonition-title,
 .md-typeset .alert > summary {
-  background-color: rgba(255, 0, 255), 0.1);
-  border-color: rgb(255, 0, 255));
+  background-color: rgba(255, 0, 255, 0.1);
+  border-color: rgb(255, 0, 255);
 }
+
 .md-typeset .alert > .admonition-title::before,
 .md-typeset .alert > summary::before {
-  background-color: rgb(255, 0, 255));
+  background-color: rgb(255, 0, 255);
   -webkit-mask-image: var(--md-admonition-icon--alert);
   mask-image: var(--md-admonition-icon--alert);
 }
-.md-typeset .attention>.admonition-title::before,
-.md-typeset .attention>summary::before,
-.md-typeset .caution>.admonition-title::before,
-.md-typeset .caution>summary::before,
-.md-typeset .warning>.admonition-title::before,
-.md-typeset .warning>summary::before {
+
+.md-typeset .advance > .admonition-title,
+.md-typeset .advance > summary,
+.md-typeset .experiment > .admonition-title,
+.md-typeset .experiment > summary {
+  background-color: rgba(0, 57, 166, 0.1);
+  border-color: rgb(0, 57, 166);
+}
+
+.md-typeset .advance > .admonition-title::before,
+.md-typeset .advance > summary::before,
+.md-typeset .experiment > .admonition-title::before,
+.md-typeset .experiment > summary::before {
+  background-color: rgb(0, 57, 166);
+  -webkit-mask-image: var(--md-admonition-icon--xadvance);
+  mask-image: var(--md-admonition-icon--xadvance);
+}
+
+.md-typeset .attention > .admonition-title::before,
+.md-typeset .attention > summary::before,
+.md-typeset .caution > .admonition-title::before,
+.md-typeset .caution > summary::before,
+.md-typeset .warning > .admonition-title::before,
+.md-typeset .warning > summary::before {
   -webkit-mask-image: var(--md-admonition-icon--xwarning);
   mask-image: var(--md-admonition-icon--xwarning);
 }
-.md-typeset .hint>.admonition-title::before,
-.md-typeset .hint>summary::before,
-.md-typeset .important>.admonition-title::before,
-.md-typeset .important>summary::before,
-.md-typeset .tip>.admonition-title::before,
-.md-typeset .tip>summary::before {
+
+.md-typeset .hint > .admonition-title::before,
+.md-typeset .hint > summary::before,
+.md-typeset .important > .admonition-title::before,
+.md-typeset .important > summary::before,
+.md-typeset .tip > .admonition-title::before,
+.md-typeset .tip > summary::before {
   -webkit-mask-image: var(--md-admonition-icon--xtip) !important;
   mask-image: var(--md-admonition-icon--xtip) !important;
 }
-.md-typeset .info>.admonition-title::before,
-.md-typeset .info>summary::before,
-.md-typeset .todo>.admonition-title::before,
-.md-typeset .todo>summary::before {
+
+.md-typeset .info > .admonition-title::before,
+.md-typeset .info > summary::before,
+.md-typeset .todo > .admonition-title::before,
+.md-typeset .todo > summary::before {
   -webkit-mask-image: var(--md-admonition-icon--xinfo);
   mask-image: var(--md-admonition-icon--xinfo);
 }
-.md-typeset .danger>.admonition-title::before,
-.md-typeset .danger>summary::before,
-.md-typeset .error>.admonition-title::before,
-.md-typeset .error>summary::before {
+
+.md-typeset .danger > .admonition-title::before,
+.md-typeset .danger > summary::before,
+.md-typeset .error > .admonition-title::before,
+.md-typeset .error > summary::before {
   -webkit-mask-image: var(--md-admonition-icon--xdanger);
   mask-image: var(--md-admonition-icon--xdanger);
 }
-.md-typeset .note>.admonition-title::before,
-.md-typeset .note>summary::before {
+
+.md-typeset .note > .admonition-title::before,
+.md-typeset .note > summary::before {
   -webkit-mask-image: var(--md-admonition-icon--xnote);
   mask-image: var(--md-admonition-icon--xnote);
 }
-.md-typeset .abstract>.admonition-title::before,
-.md-typeset .abstract>summary::before,
-.md-typeset .summary>.admonition-title::before,
-.md-typeset .summary>summary::before,
-.md-typeset .tldr>.admonition-title::before,
-.md-typeset .tldr>summary::before {
+
+.md-typeset .abstract > .admonition-title::before,
+.md-typeset .abstract > summary::before,
+.md-typeset .summary > .admonition-title::before,
+.md-typeset .summary > summary::before,
+.md-typeset .tldr > .admonition-title::before,
+.md-typeset .tldr > summary::before {
   -webkit-mask-image: var(--md-admonition-icon--xabstract);
   mask-image: var(--md-admonition-icon--xabstract);
 }
-.md-typeset .faq>.admonition-title::before,
-.md-typeset .faq>summary::before,
-.md-typeset .help>.admonition-title::before,
-.md-typeset .help>summary::before,
-.md-typeset .question>.admonition-title::before,
-.md-typeset .question>summary::before {
+
+.md-typeset .faq > .admonition-title::before,
+.md-typeset .faq > summary::before,
+.md-typeset .help > .admonition-title::before,
+.md-typeset .help > summary::before,
+.md-typeset .question > .admonition-title::before,
+.md-typeset .question > summary::before {
   -webkit-mask-image: var(--md-admonition-icon--xquestion);
   mask-image: var(--md-admonition-icon--xquestion);
 }
-.md-typeset .check>.admonition-title::before,
-.md-typeset .check>summary::before,
-.md-typeset .done>.admonition-title::before,
-.md-typeset .done>summary::before,
-.md-typeset .success>.admonition-title::before,
-.md-typeset .success>summary::before {
+
+.md-typeset .check > .admonition-title::before,
+.md-typeset .check > summary::before,
+.md-typeset .done > .admonition-title::before,
+.md-typeset .done > summary::before,
+.md-typeset .success > .admonition-title::before,
+.md-typeset .success > summary::before {
   -webkit-mask-image: var(--md-admonition-icon--xsuccess);
   mask-image: var(--md-admonition-icon--xsuccess);
 }
-.md-typeset .fail>.admonition-title::before,
-.md-typeset .fail>summary::before,
-.md-typeset .failure>.admonition-title::before,
-.md-typeset .failure>summary::before,
-.md-typeset .missing>.admonition-title::before,
-.md-typeset .missing>summary::before {
+
+.md-typeset .fail > .admonition-title::before,
+.md-typeset .fail > summary::before,
+.md-typeset .failure > .admonition-title::before,
+.md-typeset .failure > summary::before,
+.md-typeset .missing > .admonition-title::before,
+.md-typeset .missing > summary::before {
   -webkit-mask-image: var(--md-admonition-icon--xfail);
   mask-image: var(--md-admonition-icon--xfail);
 }
-.md-typeset .bug>.admonition-title::before,
-.md-typeset .bug>summary::before {
+
+.md-typeset .bug > .admonition-title::before,
+.md-typeset .bug > summary::before {
   -webkit-mask-image: var(--md-admonition-icon--xbug);
   mask-image: var(--md-admonition-icon--xbug);
 }
-.md-typeset .example>.admonition-title::before,
-.md-typeset .example>summary::before {
+
+.md-typeset .example > .admonition-title::before,
+.md-typeset .example > summary::before {
   -webkit-mask-image: var(--md-admonition-icon--xexample);
   mask-image: var(--md-admonition-icon--xexample);
 }
-.md-typeset .cite>.admonition-title::before,
-.md-typeset .cite>summary::before,
-.md-typeset .quote>.admonition-title::before,
-.md-typeset .quote>summary::before {
+
+.md-typeset .cite > .admonition-title::before,
+.md-typeset .cite > summary::before,
+.md-typeset .quote > .admonition-title::before,
+.md-typeset .quote > summary::before {
   -webkit-mask-image: var(--md-admonition-icon--xquote);
   mask-image: var(--md-admonition-icon--xquote);
 }
-
-td {
-  vertical-align: middle !important;
-  text-align: center !important;
-}
-th {
-  font-weight: bold !important;
-  text-align: center !important;
-}
 .md-nav__item--active > .md-nav__link {
   font-weight: bold;
 }
+
 .center {
   display: block;
   margin-left: auto;
   margin-right: auto;
   width: 80%;
 }
+
+/* Handles Gitter Sidecard UI */
+.gitter-open-chat-button {
+  background-color: var(--md-primary-fg-color) !important;
+  font-family: inherit !important;
+  font-size: 12px;
+}
+
 .center-small {
   display: block;
   margin-left: auto;
   margin-right: auto;
   width: 90%;
 }
+
 .md-tabs__link--active {
   font-weight: bold;
 }
+
 .md-nav__title {
   font-size: 1rem !important;
 }
+
 .md-version__link {
   overflow: hidden;
 }
+
 .md-version__current {
+  text-transform: uppercase;
   font-weight: bolder;
 }
+
 .md-typeset .task-list-control .task-list-indicator::before {
-  background-color: #FF0000;
-  -webkit-mask-image: var(--md-admonition-icon--failure);
-  mask-image: var(--md-admonition-icon--failure);
+  background-color: #ff0000;
+  -webkit-mask-image: var(--md-admonition-icon--failure);
+  mask-image: var(--md-admonition-icon--failure);
 }
+
 blockquote {
   padding: 0.5em 10px;
   quotes: "\201C""\201D""\2018""\2019";
 }
+
 blockquote:before {
   color: #ccc;
   content: open-quote;
@@ -208,10 +257,12 @@ blockquote:before {
   margin-right: 0.25em;
   vertical-align: -0.4em;
 }
+
 blockquote:after {
   visibility: hidden;
   content: close-quote;
 }
+
 blockquote p {
   display: inline;
 }
@@ -224,6 +275,7 @@ blockquote p {
   display: flex;
   justify-content: center;
 }
+
 .embed-responsive {
   position: relative;
   display: block;
@@ -231,10 +283,12 @@ blockquote p {
   padding: 0;
   overflow: hidden;
 }
+
 .embed-responsive::before {
   display: block;
   content: "";
 }
+
 .embed-responsive .embed-responsive-item,
 .embed-responsive iframe,
 .embed-responsive embed,
@@ -248,15 +302,19 @@ blockquote p {
   height: 100%;
   border: 0;
 }
+
 .embed-responsive-21by9::before {
   padding-top: 42.857143%;
 }
+
 .embed-responsive-16by9::before {
   padding-top: 56.25%;
 }
+
 .embed-responsive-4by3::before {
   padding-top: 75%;
 }
+
 .embed-responsive-1by1::before {
   padding-top: 100%;
 }
@@ -265,6 +323,7 @@ blockquote p {
 footer.sponsorship {
   text-align: center;
 }
+
 footer.sponsorship hr {
   display: inline-block;
   width: 2rem;
@@ -272,15 +331,19 @@ footer.sponsorship hr {
   vertical-align: middle;
   border-bottom: 2px solid var(--md-default-fg-color--lighter);
 }
+
 footer.sponsorship:hover hr {
   border-color: var(--md-accent-fg-color);
 }
+
 footer.sponsorship:not(:hover) .twemoji.heart-throb-hover svg {
   color: var(--md-default-fg-color--lighter) !important;
 }
+
 .doc-heading {
   padding-top: 50px;
 }
+
 .btn {
   z-index: 1;
   overflow: hidden;
@@ -295,10 +358,12 @@ footer.sponsorship:not(:hover) .twemoji.heart-throb-hover svg {
   font-weight: bold;
   margin: 5px 0px;
 }
+
 .btn.bcolor {
   border: 4px solid var(--md-typeset-a-color);
   color: var(--blue);
 }
+
 .btn.bcolor:before {
   content: "";
   position: absolute;
@@ -310,53 +375,68 @@ footer.sponsorship:not(:hover) .twemoji.heart-throb-hover svg {
   z-index: -1;
   transition: 0.2s ease;
 }
+
 .btn.bcolor:hover {
   color: var(--white);
   background: var(--md-typeset-a-color);
   transition: 0.2s ease;
 }
+
 .btn.bcolor:hover:before {
   width: 100%;
 }
+
 main #g6219 {
   transform-origin: 85px 4px;
-  animation: an1 12s .5s infinite ease-out;
+  animation: an1 12s 0.5s infinite ease-out;
 }
+
 @keyframes an1 {
   0% {
     transform: rotate(0);
   }
+
   5% {
     transform: rotate(3deg);
   }
+
   15% {
     transform: rotate(-2.5deg);
  }
+
   25% {
     transform: rotate(2deg);
   }
+
   35% {
     transform: rotate(-1.5deg);
   }
+
   45% {
     transform: rotate(1deg);
   }
+
   55% {
     transform: rotate(-1.5deg);
   }
+
   65% {
     transform: rotate(2deg);
   }
+
   75% {
     transform: rotate(-2deg);
   }
+
   85% {
     transform: rotate(2.5deg);
   }
+
   95% {
     transform: rotate(-3deg);
   }
+
   100% {
     transform: rotate(0);
   }
-}
+}
\ No newline at end of file
diff --git a/docs/overrides/main.html b/docs/overrides/main.html
index 969e3c710..1c8d80468 100644
--- a/docs/overrides/main.html
+++ b/docs/overrides/main.html
@@ -27,6 +27,7 @@
+
 {% endblock %}