

About Q&A: please follow the rules below


Here are some common questions. If you can't find your question, please search this Issue first. If you are sure it is a bug that has not been reported before, please submit an Issue according to the requirements.

Note: This is FAQ about SRS, please see Oryx FAQ for Oryx.


  • Questions about RTMP/HTTP-FLV/WebRTC live streaming?
    1. SRS only supports streaming protocols, such as live streaming and WebRTC. For details, please refer to the cluster section in the wiki.
  • Questions about HLS/DASH segmented live streaming, or on-demand/recording/VoD/DVR?
    1. SRS can record as on-demand files. Please refer to DVR
    2. SRS can generate HLS or DASH. Please refer to HLS
  • Questions about HLS/DASH/VoD/DVR distribution clusters?
    1. These are all HTTP files, and for HTTP file distribution clusters, it is recommended to use NGINX. Please refer to HLS Cluster
    2. You can use NGINX in conjunction with SRS Edge to distribute HTTP-FLV, implementing the distribution of all HTTP protocols. Please refer to Nginx For HLS
  • SRS origin cluster, multi-stream hot backup, stream switching, and push-stream disaster recovery: for questions about live stream disaster recovery and switching, refer to link.
  • How can you build a server network to provide nearby services and expand server capacity? You can use the SRS Edge cluster as a solution. For more information, refer to this link.
  • How to create multi-stream backup and switch between them: Use multiple streams and select one that is available. For stream disaster recovery and switching, refer to this link.
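The Edge approach above can be sketched in configuration. This is a simplified fragment, assuming the standard `cluster` directives; the origin address is an example value (see the Edge Cluster wiki for the authoritative form):

```
# Sketch: an edge server that pulls streams from an origin on demand.
vhost __defaultVhost__ {
    cluster {
        mode    remote;             # this server acts as an edge
        origin  127.0.0.1:19350;    # address of the origin server (example)
    }
}
```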


  • Pagination: For pagination issues related to console streams and clients, refer to #3451
    1. The default API parameters are start=0, count=10, and the Console does not support pagination. It is planned to be supported in the new Console.


  • CORS: How to set up cross-domain access for HTTP APIs or streams
    1. SRS 3.0 supports cross-domain (CORS) access, so there is no need for additional HTTP proxies, as it is built-in and enabled by default. Please refer to #717 #798 #1002
    2. Of course, an Nginx proxy server can also solve cross-domain issues, so there is no need to set it in SRS. Note that you only need to proxy the API, not the media streams: the bandwidth of streams is too high for a proxy, which would overload it, and proxying streams is unnecessary.
    3. Use Nginx or Caddy proxy to provide a unified HTTP/HTTPS service. Please refer to #2881
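As a sketch of the Nginx approach above, proxying only the API and not the media streams: the domain, ports, and paths below are assumptions for illustration, with 1985 being SRS's default HTTP API port.

```nginx
# Hypothetical Nginx reverse proxy: forward only the HTTP API to SRS.
server {
    listen 80;
    server_name api.example.com;  # assumed domain

    # Proxy the API only; do NOT route media streams through Nginx this way,
    # since stream bandwidth would overload the proxy.
    location /api/ {
        proxy_pass http://127.0.0.1:1985;
        proxy_set_header Host $host;
    }
}
```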

CPU and OS

  • CPU and OS: Which CPU architectures and operating systems does SRS support?
    1. SRS supports common CPU architectures, such as x86_64/amd64, armv7/aarch64/Apple M1, MIPS, RISC-V, and Loongson LoongArch. For other CPU adaptations, please refer to ST#22.
    2. SRS supports commonly used operating systems, such as Linux (including CentOS and Ubuntu), macOS, and Windows.
    3. SRS also supports Chinese Xinchuang (domestic IT innovation) systems. If you need to adapt to a new Xinchuang system, you can submit an issue.
  • Windows: Special notes about Windows
    1. Generally, Windows is less used as a server, but there are some application scenarios. SRS 5.0 currently supports Windows, and each version will have a Windows installation package for download.
    2. Since it is difficult for some users to download from GitHub, we provide a Gitee mirror. Please see Gitee: Releases for each version's attachments.
    3. There are still some issues on the Windows platform that have not been resolved, and we will continue to improve support. For details, please refer to #2532.


  • Dynamic DVR: How to do dynamic recording, regular expression matching for streams that need to be recorded, etc.
    1. You can use on_publish to callback the business system and implement complex rules.
    2. For specific recording files, use on_hls to copy the slices to the recording directory or cloud storage.
    3. You can refer to the DVR implementation in oryx.
    4. SRS will not support dynamic DVR, but some solutions are provided. You can also refer to #1577.
  • Why does recording WebRTC as MP4 fail in SRS? Refer to this link for more information.
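The on_publish/on_hls approach above can be sketched in a config fragment. The hook URLs are hypothetical endpoints of your own business service; verify directive names against your SRS version's HTTP hooks documentation:

```
# Sketch: SRS vhost with HTTP hooks for dynamic DVR decisions.
vhost __defaultVhost__ {
    hls {
        enabled     on;
        hls_path    ./objs/nginx/html;
    }
    http_hooks {
        enabled     on;
        # Decide per-stream whether to record, in your own service.
        on_publish  http://127.0.0.1:8085/api/v1/publish;
        # Called when a TS segment is closed; copy it to storage there.
        on_hls      http://127.0.0.1:8085/api/v1/hls;
    }
}
```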


  • Edge HLS/DVR/RTC: Does the Edge Cluster support HLS/DVR/RTC, etc.?
    1. Edge is a live streaming cluster that only supports live streaming protocols such as RTMP and FLV. Only the origin server can support HLS/DVR/RTC. Refer to #1066
    2. Currently, Edge does not restrict the use of HLS/DVR/RTC, but these will be disabled in the future. Please do not rely on them in this way; they are not guaranteed to work.
    3. For the HLS cluster, please refer to the documentation HLS Edge Cluster
    4. The development of WebRTC and SRT clustering capabilities is in progress. Refer to #3138


  • FFmpeg: Questions related to FFmpeg
    1. If FFmpeg is not found (the error terminate, please restart it appears, compilation fails with No FFmpeg found, or FFmpeg does not support H.265 or other codecs), you need to compile or download FFmpeg yourself and place it in the expected path so that SRS can detect it. Please refer to #1523
    2. If you have questions about using FFmpeg, please do not submit issues in SRS. Instead, go to the FFmpeg community. Issues about FFmpeg in SRS will be deleted directly. Don't be lazy.


  • About supported features, outdated features, and plans?
    1. Each version supports different features, which are listed on the Github homepage, such as develop/5.0, release/4.0, release/3.0.
    2. The changes in each version are also different and are listed on the Github homepage, such as develop/5.0, release/4.0, release/3.0.
    3. In addition to adding new features, SRS will also remove unsuitable features, such as RTSP push streaming, srs-librtmp, GB SIP signaling, etc. These features may be useless, inappropriate, or provided in a more suitable way. See #1535 for more information.


  • GB28181: What about GB28181 status and roadmap
    1. GB has been moved to a separate repository srs-gb28181, please refer to #2845
    2. For GB usage, please refer to #1500. Currently, GB is still in the feature/gb28181 branch. It will be merged into develop and then released after it is stable. It is expected to be released in SRS 5.0.
    3. SRS support for GB will not be comprehensive; it will only be used as an access protocol. Two-way intercom, which users care about most, is planned to be supported.


  • No one answers questions in the WeChat group? The art of asking questions in the community?
    1. Please search in the various documents of the community first, and do not ask questions that already have answers.
    2. Please describe the background of the problem in detail, and show the efforts you have made.
    3. Open source community means you need to be able to solve problems yourself. If not, please consider paid consultation.


  • RTMP for HEVC: Does RTMP support HEVC?
    1. How to support RTMP FLV HEVC streaming, refer to the link.

HLS Fragments

  • HLS Fragment Duration: How to set up the HLS segment duration
    1. HLS segment duration is determined by three factors: GOP length, whether to wait for a keyframe (hls_wait_keyframe), and segment duration (hls_fragment).
    2. For example, if the GOP is set to 2s, the segment length is hls_fragment:5, and hls_wait_keyframe:on, then the actual duration of each TS segment may be around 5~6 seconds, as it needs to wait for a complete GOP before closing the segment.
    3. For example, if the GOP is set to 10s, the segment length is hls_fragment:5, and hls_wait_keyframe:on, then the actual duration of each TS segment will exceed 10 seconds, because the segment cannot be closed until the next keyframe arrives.
    4. For example, if the GOP is set to 10s, the segment length is hls_fragment:5, and hls_wait_keyframe:off, then the actual duration of each TS segment is around 5 seconds. The segment does not start with a keyframe, so some players may experience screen artifacts or slower video playback.
    5. For example, if the GOP is set to 2s, the segment length is hls_fragment:2, and hls_wait_keyframe:on, then the actual duration of each TS segment may be around 2 seconds. This way, the HLS delay is relatively low, and there will be no screen artifacts or decoding issues, but the encoding quality may be slightly compromised due to the smaller GOP.
    6. Although the segment size can be set to less than 1 second, such as hls_fragment:0.5, the #EXT-X-TARGETDURATION is still 1 second because it is an integer. Moreover, having too small segments can lead to an excessive number of segments, which is not conducive to CDN caching or player caching, so it is not recommended to set too small segments.
    7. If you want to reduce latency, do not set the segment duration to less than 1 second; setting it to 1 or 2 seconds is more appropriate. Because even if it is set to 1 second, due to the player's segment fetching strategy and caching policy, the latency will not be the same as RTMP or HTTP-FLV streams. The minimum latency for HLS is generally over 5 seconds.
    8. GOP refers to the number of frames between two keyframes, which is set in the encoder. For example, the FFmpeg parameters -r 25 -g 50 set the frame rate to 25fps and the GOP to 50 frames, which equals 2 seconds.
    9. In OBS, there is a Keyframe Interval (0=auto) setting, with a minimum of 1s. Setting it to 0 means automatic, not the lowest latency. For low latency, it is recommended to set it to 1s or 2s.
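The settings above can be sketched in a config fragment matching example 5 (2s GOP plus 2s segments for lower latency). Values are illustrative; verify directive names against your SRS version:

```
# Sketch: ~2s TS segments, assuming the encoder produces a 2s GOP.
vhost __defaultVhost__ {
    hls {
        enabled             on;
        hls_fragment        2;      # target segment duration, seconds
        hls_wait_keyframe   on;     # close segments on keyframes only
        hls_window          60;     # total playlist duration to keep
    }
}
```

On the encoder side, the matching FFmpeg flags would be -r 25 -g 50 (25fps, 2s GOP), as in item 8.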


  • HTTP RAW API: Why the RAW API was removed, dynamic recording DVR, etc.

    1. The RAW API caused various problems and invited overuse, so the feature was removed in version 4.0. For detailed reasons, please see #2653.
    2. Again, do not use the HTTP RAW API to implement business logic; that is what your business system should do. You can use Go or Node.js to implement it.
  • Secure HTTP API: How to do API authentication, API security, etc.

    1. Regarding HTTP API authentication and how to prevent everyone from accessing it, it is currently recommended to use Nginx proxy to solve this issue. The support will be enhanced in the future. For details, please see #1657.
    2. You can also use HTTP Callback to implement authentication. When pushing or playing a stream, call your business system's API to implement the hook.
  • HTTP Callback: How to do HTTP callback and authentication.

    1. SRS uses HTTP callback for authentication. To learn how to return error codes in HTTP Callback and Response, please refer to this link.
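As a sketch of the callback-based authentication described above, using only the Python standard library: the convention that `code: 0` means success follows the HTTP callback behavior described here, while the token policy, port, and URL are assumptions for illustration.

```python
# Sketch of an SRS HTTP-callback auth server (stdlib only).
# The token policy and port are hypothetical; adapt to your business system.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

SECRET = "?token=mysecret"  # hypothetical shared secret in the stream URL

def response_code(body: bytes) -> int:
    """Return 0 to allow the client, non-zero to reject it."""
    try:
        req = json.loads(body)
    except ValueError:
        return 1
    # Example policy: only allow publishers that present the secret token.
    if req.get("action") == "on_publish" and req.get("param") != SECRET:
        return 1
    return 0

class HookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        payload = json.dumps({"code": response_code(body)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# To run: HTTPServer(("127.0.0.1", 8085), HookHandler).serve_forever()
```

In srs.conf, an on_publish hook under http_hooks would then point at this service's URL.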


  • HTTPS: How to use HTTPS services, API, Callback, Streaming, WebRTC, etc.
    1. HTTPS API provides transport layer security for the API. WebRTC push streaming requires HTTPS pages, which can only access HTTPS APIs.
    2. HTTPS Callback calls back to HTTPS services. If your server uses the HTTPS protocol, most business systems use HTTPS for security purposes.
    3. HTTPS Live Streaming provides transport layer security for streaming, mainly because HTTPS pages can only access HTTPS resources.
    4. Automatically apply for SSL certificates from Let's Encrypt for a single domain, making it easier for small and medium-sized enterprises to deploy SRS and avoiding the high overhead of HTTPS proxies for streaming media businesses. See #2864
    5. Use Nginx or Caddy as reverse proxies for HTTP/HTTPS Proxy to provide unified HTTP/HTTPS services. See #2881
  • HTTP2: How to do HTTP2-FLV or HTTP2 HLS, etc.
    1. SRS will not implement HTTP2 or HTTP3, but instead recommends using reverse proxies to convert protocols, such as Nginx or Go.
    2. Since HTTP is a very mature protocol, existing tools and reverse proxy capabilities are very comprehensive, and SRS does not need to implement a complete protocol.
    3. SRS has implemented a simple HTTP 1.0 protocol, mainly providing API and Callback capabilities.
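The built-in HTTPS described above can be sketched as a config fragment. The `https` sub-blocks follow the shape of the stock conf examples in recent SRS releases; ports and certificate paths are illustrative, so verify against your version:

```
# Sketch: HTTPS for the API and the HTTP streaming server.
http_api {
    enabled     on;
    listen      1985;
    https {
        enabled on;
        listen  1990;
        key     ./conf/server.key;
        cert    ./conf/server.crt;
    }
}
http_server {
    enabled     on;
    listen      8080;
    https {
        enabled on;
        listen  8088;
        key     ./conf/server.key;
        cert    ./conf/server.crt;
    }
}
```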


  • Latency: How to reduce latency, how to do low-latency live streaming, and how much latency WebRTC has.
    1. Live streaming latency is generally 1 to 3 seconds and WebRTC latency is around 100ms, so why is the latency of a self-built environment so high?
    2. The most common reason for high latency is using the VLC player, which has a latency of tens of seconds. Please switch to the SRS H5 player.
    3. Latency is related to each link, not just SRS reducing latency. It is also related to the push tool (FFmpeg/OBS) and the player. Please refer to Realtime and follow the steps to set up a low-latency environment. Don't start with your own fancy operations, just follow the documentation.
    4. If you still find high latency after following the steps, how to troubleshoot? Please refer to #2742
  • HLS Latency: How to reduce the latency of HLS.
    1. HLS has a large delay, and it takes a long time to watch after switching content. How to reduce HLS latency? Refer to the link.
    2. How to configure SRS to reduce HLS latency.
  • Benchmark: How to benchmark and testing latency.
    1. How to measure and optimize live streaming latency, latency in different stages and protocols, how to improve and measure latency, refer to this link.
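The low-latency setup referenced above can be sketched as a config fragment. Directive names follow the stock realtime example config; verify them against your SRS version before relying on this:

```
# Sketch of common low-latency live streaming settings.
vhost __defaultVhost__ {
    tcp_nodelay     on;
    min_latency     on;
    play {
        gop_cache       off;    # don't replay a cached GOP to new players
        queue_length    10;
        mw_latency      100;
    }
    publish {
        mr              off;    # disable merged reads on publish
    }
}
```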

Performance and Memory

  • Performance: How to do performance optimization, concurrency, stress testing, and memory leaks
    1. Performance is a comprehensive topic, including the quality of the project, the capacity and concurrency it supports, how to optimize performance, and even memory issues, such as memory leaks (leading to reduced performance), out-of-bounds and wild pointer problems.
    2. To understand the concurrency of SRS, you must measure live streaming and WebRTC separately. Live streaming can use srs-bench, and WebRTC can use its feature/rtc branch for stress testing, to obtain the concurrency your hardware and software environment supports under specific bitrates, latency, and business characteristics.
    3. SRS also provides official concurrency data, which can be found in Performance. It also explains how to measure this concurrency, the conditions under which the data is obtained, and specific optimization code.
    4. If you need to investigate performance issues, memory leaks, or wild pointer problems, you must use system-related tools such as perf, valgrind, or gperftools. For more information, please refer to SRS Performance (CPU), Memory Optimization Tool Usage or Perf.
    5. It is important to note that valgrind has been supported since SRS 3.0 (inclusive), and the ST patch has been applied.


  • Player: How to choose players and OS platforms.
    1. How to choose a live streaming player, with an introduction to the corresponding protocols (HTTP-FLV/HLS/WebRTC) and their latency: refer to the link
    2. How to play HTTP-FLV with HTML5, MSE compatibility, HTML5 players on various platforms, and how to use WASM to play FLV on iOS: refer to the link


  • RTSP: How to support RTSP streaming, RTSP server, RTSP playback, etc.
    1. SRS supports pulling RTSP with Ingest, but does not support pushing RTSP to SRS, which is not the correct usage. For detailed reasons, please refer to #2304.
    2. Of course, RTSP server and RTSP playback will not be supported either, please refer to #476.
    3. If you need a large number of camera connections, such as 10,000, using FFmpeg may be more difficult. For such large-scale businesses, the recommended solution is to use ST+SRS code to implement an RTSP forwarding server.
  • Browser RTSP: How to play RTSP streams in a browser
    1. How to play RTSP streams in HTML5, using FFmpeg to pull RTSP streams, and how to reduce latency. Refer to this link.
    2. How to watch RTSP streams from IP cameras in a web browser. Refer to this link.
  • How can we use a single server to receive all IPC streams, convert internal network RTSP to public network live streaming or RTC? Refer to this link for more information.
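The Ingest approach above (pulling RTSP from a camera and republishing as RTMP) can be sketched as a config fragment. The camera URL is an example, and the directive shape follows the stock ingest config; verify against your SRS version:

```
# Sketch: pull an RTSP camera stream with FFmpeg and republish to SRS.
vhost __defaultVhost__ {
    ingest camera1 {
        enabled on;
        input {
            type    stream;
            url     rtsp://192.168.1.10/stream1;   # example camera URL
        }
        ffmpeg      ./objs/ffmpeg/bin/ffmpeg;
        engine {
            enabled off;    # no transcoding, forward as-is
            output  rtmp://127.0.0.1:1935/live?vhost=__defaultVhost__/camera1;
        }
    }
}
```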


  • Media Stream Server: What's the difference between media servers.
    1. How to do live streaming or calls, the differences and focus points between live streaming and RTC (Real-Time Communication), refer to this link.
    2. How to do live streaming between Android devices, including live streaming servers and players, and how to transfer video between two Android devices, refer to this link.
    3. Recommended media servers and protocol introductions, various protocols used in live streaming, refer to this link.
  • Raspberry Pi: How to run in Raspberry Pi.
    1. Remote control of Raspberry Pi camera and car, live streaming and pure WebRTC solution, refer to this link.
  • Others: Other solutions and common questions.
    1. Why do two RTMP streams gradually go out of sync, and how can SRT or WebRTC be used to keep two different streams synchronized? Refer to this link.
    2. How does the SRS origin cluster support HLS, and how are the sliced files distributed? Refer to this link.
    3. How can the SRS origin cluster be expanded, and how can MESH communication issues be resolved? Refer to this link.
    4. Record video using WebRTC and use SRS to convert WebRTC to RTMP for recording. Refer to this link.
    5. The differences between RTSP and RTP, and between RTSP and WebRTC. Refer to this link.
    6. The meaning of SRS log abbreviations and connection-based logs. Refer to this link.
    7. Why FPS is not accurate, the meaning of TBN, and conversion errors. Refer to this link.
    8. What is RTMP's tcURL, and how to get the stream address? Refer to this link.
    9. How to play RTMP streams in H5 without using Flash and Nginx? Refer to this link.
    10. Can WebRTC replace RTMP, and is live streaming only possible with WebRTC? Refer to this link.
    11. How to do a video live stream through a VPS? Refer to this link.

Source Cleanup

  • Source Cleanup: How to fix memory growth for a large number of streams
    1. The Source object for push streaming is not cleaned up, and memory will increase as the number of push streams increases. For now, you can use Gracefully Quit as a workaround, and this issue will be addressed in the future. See #413
    2. To reiterate, you can use Gracefully Quit as a workaround. Even if this issue is resolved in the future, this solution is the most reliable and optimal one. Restarting is always a good option.
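The Gracefully Quit workaround above can be sketched as a config fragment. These global directives appear in the full example config of recent SRS versions; the wait values are illustrative, so check the Gracefully Quit wiki for your version:

```
# Sketch: gracefully quit, so restarts don't drop clients abruptly.
grace_start_wait    2300;   # ms to wait before starting to quit
grace_final_wait    3200;   # ms to wait before final exit
force_grace_quit    on;     # treat SIGTERM as gracefully quit too
```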


  • Why doesn't SRS support multi-threading, and how can you scale your SRS? Refer to this link for more information.

Video Guides

Here are the Q&A videos, each explaining a topic in detail. If your question is similar, please watch the video directly:

WebRTC Cluster

  • WebRTC+Cluster: Does SRS support WebRTC clustering?
    1. WebRTC clustering is not the same as live streaming clustering (Edge+Origin Cluster); it is called WebRTC cascading. Please refer to #2091
    2. In addition to the clustering solution, SRS will also support the Proxy solution, which is simpler than clustering and will have scalability and disaster recovery capabilities. Please refer to #3138

WebRTC Live

  • WebRTC+Live: How to convert Live stream with WebRTC.
    1. For conversion between WebRTC and RTMP, such as RTMP2RTC (push RTMP, play WebRTC) or RTC2RTMP (push WebRTC, play RTMP), you must enable the conversion in configuration. Audio transcoding is not enabled by default, to avoid significant performance loss. Please refer to #2728
    2. If SRS 4.0.174 or earlier works, but it does not work after updating, it is because rtc.conf does not enable RTMP to RTC by default. You need to use rtmp2rtc.conf or rtc2rtmp.conf. Please refer to 71ed6e5dc51df06eaa90637992731a7e75eabcd7
    3. In the future, conversion between RTC and RTMP will still not be enabled automatically, because SRS must also serve pure-RTMP and pure-RTC scenarios. The conversion scenario is only one of them, and its serious performance cost means enabling it by default would cause major problems for the independent scenarios.
  • How can WebRTC support one-to-many broadcasting and accommodate a large number of streaming clients? For WebRTC to be used in live streaming, you can refer to this link.
  • How to achieve low-latency live streaming with FFmpeg and HTML5, using Raspberry Pi as a streaming device for remote assistance in medical equipment. For more information, refer to this link.
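The rtmp2rtc configuration mentioned above can be sketched as a simplified fragment; see the stock conf/rtmp2rtc.conf in your SRS release for the authoritative version, and note the candidate value must be an IP reachable by browsers:

```
# Sketch of what conf/rtmp2rtc.conf enables (simplified).
rtc_server {
    enabled     on;
    listen      8000;           # UDP port for WebRTC
    candidate   $CANDIDATE;     # public/LAN IP announced to browsers
}
vhost __defaultVhost__ {
    rtc {
        enabled     on;
        rtmp_to_rtc on;         # convert RTMP publish for WebRTC playback
    }
}
```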


  • WebRTC: Questions about WebRTC push and pull streams or conferences

    1. WebRTC is much more complicated than live streaming. For many WebRTC issues, do not submit issues in SRS, but search for the problem on Google first. If you do not have this ability, do not use WebRTC. There are many pitfalls, and if you do not have the ability to crawl out of them, do not jump into them.
    2. A common issue is that the Candidate setting is incorrect, causing the push and pull streams to fail. For details, see the WebRTC usage instructions: #307
    3. There are also issues with UDP ports being inaccessible, which may be due to firewall settings or network issues. Please use tools to test, refer to #2843
    4. Another common issue is the conversion between RTMP and WebRTC. Please see the description above #webrtc-live.
    5. Then there are WebRTC permission issues, such as being able to push streams locally but not on the public network. This is a Chrome security setting issue. Please refer to #2762
    6. There are also less common issues, such as not being able to play non-HTTPS SRS streams with the official player. This is also a Chrome security policy issue. Please refer to #2787
    7. When mapping ports in docker, if you change the port, you need to modify the configuration file or specify it through eip. Please refer to #2907
  • WebRTC RTMP: Questions related to WebRTC and live streaming.

    1. For WebRTC to RTMP conversion, using WebRTC for live streaming, HTML5 push streaming, or low-latency live streaming, refer to this link.
    2. For RTMP to WebRTC conversion, low-latency live streaming solutions, HTTP-TS, and HEVC live streaming, refer to this link.
    3. To learn how to use WebRTC to push streams to YouTube, while also recording and watching streams with WebRTC, refer to this link.
  • What are the roles and application scenarios of WebRTC's SFU (Selective Forwarding Unit), and how do different SFUs compare in functionality? For more information, refer to this link.


  • WebSocket/WS: How to support WS-FLV or WS-TS?
    1. You can use a Go proxy to do the conversion; the key code is only a few lines, and it is stable and reliable. Please refer to mse.go


WebRTC Demo Failed

Question: Failed to join RTC room or start a conversation

According to the 5.0 documentation for SFU: One to One, I have completed the following configurations:

  1. Configured the CANDIDATE to use the internal IP address
  2. Used Docker to start RTC service, Signaling service, and HTTPS service.
  3. Successfully accessed the demo page and opened it without any issues.

However, when I click on "Start Conversation" or "Join Room," my computer's camera briefly lights up but there is no response. I have already used a self-signed OpenSSL key and crt certificate, but encountered a TLS certificate handshake error.


  1. First, confirm that you strictly followed the documentation: SFU: One to One
  2. To identify the cause, investigate potential factors such as certificate problems, HTTPS connection issues, and browser permission settings.


See FAQ:
* Chinese:
* English:

Duplicate or pre-existing issues may be removed, as they are already covered in the issues or the FAQ:

For discussion or ideas, please ask in [discord](

This issue will be closed, see #2716
Please ask this question on Stack Overflow using the [#simple-realtime-server tag](
