The Static Mage

How We Built a Bulletproof Re-Streaming Server for Path to Hope

2026-05-02

A technical deep dive into mediamtx, ffmpeg, and the kind of paranoid infrastructure you build after a server goes AWOL.

If you have not read the story of how we kept the Path to Hope stream online for 14 days straight, you should probably start there. This post is the first in a series of technical companion pieces: the one where we explain the re-streaming layer that sat between our chaos and the internet, quietly making sure that Twitch and YouTube never saw the full brunt of any infrastructure meltdowns.

Why We Needed a Re-Stream Server

At a high level, the pipeline looked like this: cameras and audio originated in the truck, then fed into an OBS instance where overlays and alerts were mixed in. We needed a reliable way to get the final stream from OBS to Twitch and YouTube.

The problem was simple: if OBS was streaming directly to Twitch and YouTube and it went down for any reason, those viewer streams would die. Full stop. No buffer, no grace period, just a hard disconnect and a bunch of confused viewers. We had already experienced network drops running OBS on the budget cloud host during the week before go-live, so we built in an additional layer of protection, and we are glad we did.

When Path to Hope went live, we had already survived a data center that could not scale and launched the stream from a storm shelter during a tornado warning. By the end of the trip, the primary OBS host (the machine running OBS, mixing audio, and generating overlays) had moved from that failed budget cloud host to a residential PC on an office floor, then to another cloud server that failed us in a different way, and finally back to the residential PC. We loved that PC, but we did not fully trust it for a fourteen-day stream with hundreds of viewers.

Choosing DigitalOcean

We were in the process of being burned by a budget host, and AWS was going to charge us approximately one kidney for outbound bandwidth for a multi-week stream. At roughly 2.7 GB per hour, multiplied by 336 hours, multiplied by 2 streaming platforms, multiplied by $0.09 per GB, we were looking at about $163.30 in bandwidth charges alone, before compute, storage, or other AWS fees. We needed something in the middle: a balance between affordability and an operator that would not move our production server to another data center without telling us.
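
For the curious, here is that estimate as a quick awk calculation. The 2.7 GB per hour figure is just a roughly 6 Mbps stream expressed as data per hour.

awk 'BEGIN {
  gb_per_hour = 2.7     # ~6 Mbps encoded stream
  hours       = 336     # 14 days
  platforms   = 2       # Twitch + YouTube
  usd_per_gb  = 0.09    # AWS egress rate quoted above
  gb = gb_per_hour * hours * platforms
  printf "%.1f GB -> $%.2f\n", gb, gb * usd_per_gb
}'
# prints: 1814.4 GB -> $163.30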

We chose DigitalOcean for the re-stream layer. I have been a customer for years. I know people who have worked there and who work there now, and they are genuinely fantastic engineers. More importantly, DigitalOcean communicates. If there is even a minimal chance of a maintenance outage, they notify customers weeks in advance. After our other hosting experience with this stream, that level of transparency felt like a luxury.

DigitalOcean is more expensive than the budget host we had used before, but it is still cheaper than AWS for our use case, especially because their virtual servers (called droplets) include a generous amount of outbound bandwidth in the base price. No surprise bills. No calculator-required pricing pages. Just a flat monthly rate and a predictable bill. The droplet we chose included 3 TB per month of outbound data transfer, and we would be well under the limit. DigitalOcean also pools bandwidth across droplets on your account, and since I already had other systems running there, my aggregate limit was higher than 3 TB per month.

The Software Stack

We kept the software stack deliberately minimal. The re-stream server had two jobs: ingest a single video stream from OBS and push copies of that stream out to Twitch and YouTube. For that, we needed only two tools: mediamtx for the ingest side and ffmpeg for the outbound pushes.

Neither tool was doing anything particularly demanding. mediamtx was receiving a single RTMP stream and serving it back out. ffmpeg was copying the video and audio codecs directly rather than transcoding, which is about as lightweight as video processing gets. No re-encoding, no resizing, no filters. Just a straight passthrough.

Server Specs and Cost

Because the workload was so light, we did not need much in the way of resources. We went with a DigitalOcean s-2vcpu-2gb droplet for the re-stream server: 2 vCPUs, 2 GB of memory, and the 3 TB of monthly outbound transfer mentioned above.

If anything, this modest droplet was overkill. During the trip, CPU usage hovered in the low single digits. Memory usage was similarly modest. Even with two constant 6 Mbps outbound streams running around the clock for two weeks, we used less than 2 TB of our included bandwidth.

How It Worked: The Data Flow

The Ingest Endpoint

mediamtx was configured with a single RTMP ingest endpoint. In a perfect world, we would have locked this down with proper authentication. We tried. Time was tight, and after a few failed attempts we did what Twitch and YouTube themselves do: we created a difficult-to-guess endpoint path and called it good enough. If a stream key is good enough for the platforms themselves, we do not feel too bad about using the same approach.

In our configuration, the ingest path was something like input-xxxxxxxxxxxxxxxxx, where xxxxxxxxxxxxxxxxx is a string of random characters. Not elegant, not ideal, but for our threat model, brute-forcing a 19-character random RTMP path was highly unlikely in practice. It bought us the security we needed without the configuration overhead we did not have time for.
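
For anyone building something similar, an obscure path like this is also easy to smoke-test without involving OBS at all: push a short local file at the endpoint with ffmpeg and confirm mediamtx picks it up. The file name and droplet address below are placeholders.

# assumes test.mp4 is already H.264/AAC, since -c copy does no transcoding
ffmpeg -re -i test.mp4 -c:v copy -c:a copy \
  -f flv rtmp://<droplet_ip>:1935/input-xxxxxxxxxxxxxxxxx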

The Always-Available Stream

One of the most important configurations in mediamtx was setting the stream to always available. This meant that even if the incoming feed from OBS was interrupted, mediamtx would continue to serve the endpoint. Twitch and YouTube would not see a disconnect. Viewers downstream would see a "STREAM IS OFFLINE" placeholder, but the connection itself would not drop.

This distinction mattered enormously. A disconnect on Twitch or YouTube means the stream session ends and viewers have to refresh. An always-available stream means the platforms show a placeholder screen for a few moments but keep the existing session alive. The difference between an admittedly bland placeholder screen and a full stream restart is the difference between "nobody noticed" and "what happened to the stream?"

ffmpeg Restreams

Two separate ffmpeg processes ran on the server, each reading from the same mediamtx output and pushing to a different platform.

The first ffmpeg read from mediamtx and forwarded to Twitch:

ffmpeg -i rtmp://localhost:1935/input-xxxxxxxxxxxxxxxxx \
  -c:v copy -c:a copy \
  -f flv rtmp://live.twitch.tv/app/<stream_key>

The second ffmpeg did the same for YouTube:

ffmpeg -i rtmp://localhost:1935/input-xxxxxxxxxxxxxxxxx \
  -c:v copy -c:a copy \
  -f flv rtmp://a.rtmp.youtube.com/live2/<stream_key>

The -c:v copy -c:a copy flags are the key here: they tell ffmpeg to copy the video and audio codecs directly without re-encoding. This keeps CPU usage minimal and avoids any quality loss from a second encoding pass.
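
Not something the pipeline strictly needed, but a handy debugging habit: pointing ffprobe at the local endpoint confirms exactly what mediamtx is serving on that path, including codecs, resolution, and audio sample rate.

ffprobe -hide_banner rtmp://localhost:1935/input-xxxxxxxxxxxxxxxxx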

Stream Keys

Each platform requires a stream key, which acts as a password for your broadcast.

For Twitch, you can find your stream key in Twitch Creator Dashboard → Settings → Stream. Look for "Primary Stream Key." Keep it secret. Anyone with this key can stream to your channel.

For YouTube, the process is slightly different. In our workflow, we scheduled a live stream in advance via YouTube Studio. Once the stream was created, YouTube provided a stream URL and stream key that we used in ffmpeg.

Configuration Files

For those who want to build something similar, here are the actual configurations we used, with secrets rotated and keys anonymized.

mediamtx Configuration

The full mediamtx configuration file is verbose, but the important parts are the RTMP server enablement, the ingest path restriction, and the always-available stream definition. The rest of the settings are mostly defaults or irrelevant to our use case, but I have included the full file for completeness.

###############################################
# Global settings -> General

logLevel: info
logDestinations: [file]
logStructured: no
logFile: /var/log/mediamtx/mediamtx-restream.log
sysLogPrefix: mediamtx-restream

readTimeout: 10s
writeTimeout: 10s
writeQueueSize: 512
udpMaxPayloadSize: 1472
udpReadBufferSize: 0

runOnConnect:
runOnConnectRestart: no
runOnDisconnect:

###############################################
# Global settings -> Authentication

authMethod: internal

authInternalUsers:
- user: any
  pass:
  ips: ['127.0.0.1', '::1']
  permissions:
  - action: read
    path: input-xxxxxxxxxxxxxxxxx

- user: any
  pass:
  ips: []
  permissions:
  - action: publish
    path: input-xxxxxxxxxxxxxxxxx

authHTTPAddress:
authHTTPExclude:
- action: api
- action: metrics
- action: pprof

authJWTJWKS:
authJWTJWKSFingerprint:
authJWTClaimKey: mediamtx_permissions
authJWTExclude: []
authJWTInHTTPQuery: true

###############################################
# Global settings -> Control API

api: no
apiAddress: :9997
apiAllowOrigins: ['*']
apiTrustedProxies: []

###############################################
# Global settings -> Metrics

metrics: no
metricsAddress: :9998

###############################################
# Global settings -> Playback

playback: no

###############################################
# Global settings -> RTSP server

rtsp: no
rtspTransports: [udp, multicast, tcp]
rtspAddress: :8551
rtpAddress: :8000
rtcpAddress: :8001
multicastIPRange: 224.1.0.0/16
multicastRTPPort: 8002
multicastRTCPPort: 8003
rtspAuthMethods: [basic]

###############################################
# Global settings -> RTMP server

rtmp: yes
rtmpAddress: :1935

###############################################
# Global settings -> HLS server

hls: no

###############################################
# Global settings -> WebRTC server

webrtc: no
webrtcAddress: :8889
webrtcLocalUDPAddress: :8189
webrtcLocalTCPAddress: ''
webrtcIPsFromInterfaces: yes
webrtcIPsFromInterfacesList: []
webrtcAdditionalHosts: []
webrtcICEServers2: []
webrtcHandshakeTimeout: 10s
webrtcTrackGatherTimeout: 2s
webrtcSTUNGatherTimeout: 5s

###############################################
# Global settings -> SRT server

srt: no
srtAddress: :8892

###############################################
# Global settings -> Path defaults

pathDefaults:
  source: publisher
  sourceOnDemand:
  sourceOnDemandStartTimeout: 10s
  sourceOnDemandCloseAfter: 10s
  maxReaders: 0
  srtReadPassphrase:
  fallback:
  useAbsoluteTimestamp: false
  record: no
  recordPath: ./recordings/%path/%Y-%m-%d_%H-%M-%S
  recordFormat: fmp4
  recordPartDuration: 1s
  recordMaxPartSize: 50M
  recordSegmentDuration: 1h
  recordDeleteAfter: 1d
  overridePublisher: yes
  srtPublishPassphrase:

  ###############################################
  # Default path settings -> Hooks

  runOnInit:
  runOnInitRestart: no
  runOnDemand:
  runOnDemandRestart: no
  runOnDemandStartTimeout: 10s
  runOnDemandCloseAfter: 10s
  runOnUnDemand:
  runOnReady:
  runOnReadyRestart: no
  runOnNotReady:
  runOnRead:
  runOnReadRestart: no
  runOnUnread:
  runOnRecordSegmentCreate:
  runOnRecordSegmentComplete:

###############################################
# Path settings

paths:
  input-xxxxxxxxxxxxxxxxx:
    alwaysAvailable: true
    alwaysAvailableTracks:
      - codec: H264
      - codec: MPEG4Audio
        sampleRate: 48000
        channelCount: 2
  all_others:

systemd Service Files

Each component ran as a systemd service, which gave us automatic startup on boot and automatic restart if the process crashed or exited. The mediamtx server itself:

# /usr/local/lib/systemd/system/mediamtx-restream.service
[Unit]
Description=MediaMTX Re-Stream Server
After=local-fs.target network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/usr/local/sbin/mediamtx /usr/local/etc/mediamtx-restream.yml
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target

The Twitch restream service:

# /usr/local/lib/systemd/system/restream-twitch.service
[Unit]
Description=Re-stream -> Twitch

[Service]
Type=simple
ExecStart=/usr/local/sbin/ffmpeg -i rtmp://localhost:1935/input-xxxxxxxxxxxxxxxxx -c:v copy -c:a copy -f flv rtmp://ingest.global-contribute.live-video.net/app/<twitch_stream_key>
Restart=always
RestartSec=1

[Install]
WantedBy=default.target

And the YouTube restream service, similarly:

# /usr/local/lib/systemd/system/restream-youtube.service
[Unit]
Description=Re-stream -> YouTube

[Service]
Type=simple
ExecStart=/usr/local/sbin/ffmpeg -i rtmp://localhost:1935/input-xxxxxxxxxxxxxxxxx -c:v copy -c:a copy -f flv rtmp://a.rtmp.youtube.com/live2/<youtube_stream_key>
Restart=always
RestartSec=1

[Install]
WantedBy=default.target

systemd: The Unsung Hero

The Restart=always directive is doing the heavy lifting here. If ffmpeg exits for any reason (network blip, platform disconnect, cosmic ray), systemd waits a few seconds and starts it again. Every single one of those 48-hour Twitch resets was handled by this directive, with no human intervention required.
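
As a small aside, newer systemd versions keep score for you: each service tracks how many times it has been restarted automatically, and you can read the counter with a single command.

systemctl show restream-twitch --property=NRestarts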

To start streaming to a platform, we simply started the corresponding service:

sudo systemctl start restream-twitch
sudo systemctl start restream-youtube

To stop:

sudo systemctl stop restream-twitch
sudo systemctl stop restream-youtube

Simple. Boring. Reliable.

One deliberate choice we made: the Twitch and YouTube restream units were disabled, not enabled. This meant that if the re-stream server rebooted itself (which never happened, but you plan for these things), the streams would not automatically go live. We wanted a human in the loop to decide when to start broadcasting. That was the right call for our use case, but your needs may vary. To disable a service so it does not start on boot:

sudo systemctl disable restream-twitch
sudo systemctl disable restream-youtube

The mediamtx server itself, on the other hand, was enabled so that it would be ready to ingest a stream as soon as the machine came up:

sudo systemctl enable mediamtx-restream

systemd Basics

If you are new to systemd, here are the commands we used most often. After creating or editing a service file, reload the systemd daemon so it picks up the changes:

sudo systemctl daemon-reload

To check whether a service is running:

sudo systemctl status restream-twitch

To view the logs for a specific service:

sudo journalctl -u restream-twitch.service

And to watch the logs in real time, which is how we monitored the Twitch restarts without having to sit there refreshing:

sudo journalctl -u restream-twitch.service -f

The -f flag works just like tail -f: it streams new log entries as they arrive. It is the easiest way to confirm that a service is behaving without repeatedly checking its status.
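
One more journalctl trick worth knowing: it can slice logs by time, which is how you reconstruct what happened overnight without scrolling through two weeks of entries.

sudo journalctl -u restream-twitch.service --since "yesterday" --until "06:00"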

The Web Interface Nobody Used

Because we are who we are, we could not resist building a small web interface that let the streamers start and stop the restream services without needing SSH access. It was a clean little page with big friendly buttons and status indicators.

It was never used. Not once. The streamers were perfectly happy with us managing it remotely, and the interface sat there like a very well-engineered security blanket. We do not regret building it. But we are honest about the ROI.

What Actually Happened

The 48-Hour Twitch Resets

Twitch enforces a hard 48-hour limit on continuous streams. When the clock runs out, they terminate the connection. Over the course of the 14-day trip, this happened six times.

Each time, the ffmpeg process in the restream-twitch.service received an error from Twitch and exited. systemd immediately restarted it. Because this is a Twitch-enforced 48-hour limit, the Twitch stream session did reset each time; no re-stream server can prevent that hard cutoff. The key point is that recovery was automatic and happened within seconds. No human intervention. No midnight pages. No emergency SSH sessions.

This is the kind of automation that does not make for exciting war stories, but it is exactly the kind of automation that makes a 14-day stream possible.

The Night One Bandwidth Hog

On the first night of the trip, someone in the Mage household was downloading a large file. (We will not name names, but legend has it that she has four paws and a tendency to sit on keyboards.) The download saturated residential bandwidth enough to slow the OBS stream originating from our house. For about five seconds, the re-stream server had no incoming video. Twitch and YouTube viewers saw the "STREAM IS OFFLINE" placeholder, but the platform connections stayed up. When bandwidth cleared and OBS resumed sending, both platforms continued the same live session. No disconnect. No lost viewers.

Without the re-stream server, that five-second hiccup might have been a full stream restart.

The Cloud Server Network Drop-Outs

During the two days on the budget-host OBS server, we experienced two network dropouts. One lasted about 60 seconds. The other lasted about 30 seconds. We were not awake for either of them; we only knew they happened because the logs told us the next morning.

Twitch and YouTube did not drop. The re-stream server kept the platform connections alive, serving the offline placeholder until the primary OBS host came back. Viewers might have seen our spartan "STREAM IS OFFLINE" screen, but the live sessions themselves did not end. That is exactly what we built it for.

The Ghost Server

One disruption we did not anticipate came from a server we thought was dead. One of our old budget-host OBS servers had dropped offline, but when it eventually came back, it reconnected to the re-stream setup and tried to push video. Two simultaneous ingest streams created a conflict. The re-stream server received data from both the active primary OBS host and this ghost from the past, and the result was a brief interruption as those streams collided. And no, this was not a banshee from TheStaticBrock's Phasmophobia stream.

At that point, we set up something we probably should have used from the start: we activated DigitalOcean's cloud firewall on the re-stream droplet to block all inbound RTMP traffic except from our home IP address. This ensured that only our primary OBS host could send video to the ingest endpoint, regardless of what any resurrected zombie server or random internet attacker might attempt. Our home IP is technically dynamic, though it has not changed in several years. A truly robust setup would handle this more automatically and elegantly, perhaps with a VPN or dynamic DNS, but the firewall rule solved the immediate problem and gave us confidence that it would not happen again.
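
We applied the rule through DigitalOcean's cloud firewall rather than on the droplet itself, but for illustration, the equivalent host-level rules with ufw would look something like this, with 203.0.113.10 standing in for our home IP:

sudo ufw allow OpenSSH                                        # keep SSH reachable before enabling
sudo ufw allow from 203.0.113.10 to any port 1935 proto tcp   # RTMP from the home IP only
sudo ufw deny 1935/tcp                                        # RTMP from everyone else
sudo ufw enable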

Server Cutovers

We cut over the primary OBS host twice during the trip: once from the residential PC to a budget-host data center OBS server, and again from that budget-host server back to the residential PC after we ran into its noisy-neighbor problems. Each cutover involved changing where OBS was sending its stream, which meant a brief interruption in the feed reaching the re-stream server.

Both times, Twitch and YouTube stayed live. The re-stream server dutifully displayed the placeholder screen, and when the new primary OBS host came online, the feed resumed. The cutovers were seamless: viewers saw the offline screen briefly, but the Twitch and YouTube sessions did not end.

OBS Restarts

Later in the trip, we noticed that OBS was beginning to exhibit lag spikes during certain operations after running for over a week straight. We established a routine of restarting OBS every one to two days to keep it fresh. Each restart briefly interrupted the OBS stream, but once again the re-stream server made sure that the platforms never saw a disconnect. We did these restarts at night, when the stream was on its "AFK" scene after the streamers had gone to bed. If anybody besides us actually noticed, nobody was saying anything.

Lessons and Takeaways

A Single Point of Failure

We should be honest about something: this re-stream server was itself a single point of failure. If that one DigitalOcean droplet went down, or if DigitalOcean itself had an outage, both Twitch and YouTube streams would have gone down with it.

We trust DigitalOcean, but in the cloud, server failures are not unexpected. A proper setup for real high availability would use two redundant re-stream servers talking to each other in an active-standby configuration, with automatic failover if the primary fails. That extra reliability turns a simple setup into a complicated one. We did not have time for that, and the setup was good enough for our needs. But you should not build something mission-critical based on this blog post without addressing that gap.

Observability

mediamtx exposes metrics, and we have used those metrics on other projects. We did not set them up here, mainly because time was short and the stream was already live. While mediamtx does support hot reload of its configuration, if you break the configuration the process exits. We were not about to experiment while fourteen days of streaming were in progress.
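
For reference, enabling them would have been a two-line change to the Metrics section of the configuration shown above, plus a local scrape to confirm the endpoint answers; we simply were not willing to touch a live config to find out.

# In mediamtx-restream.yml:
#   metrics: yes
#   metricsAddress: :9998
# Then, on the droplet:
curl -s http://localhost:9998/metrics | head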

Elsewhere in our stack, we had automatic Discord notifications baked into several components: webhooks fired when things started, stopped, or misbehaved. Again, due to time constraints, we did not instrument the re-stream server the same way, though we would have with more time. Knowing exactly when the input stream went online or offline, and getting pinged the moment a Twitch or YouTube ffmpeg process restarted, would have removed the last bit of guesswork. The logs were enough, but metrics and notifications would have been better.
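
For a concrete picture of what that instrumentation would have looked like, here is a minimal sketch of the kind of Discord webhook notifier we used elsewhere in the stack. The webhook URL is a placeholder, and hooking it into a unit is as simple as an ExecStartPost= line that calls the script.

#!/usr/bin/env bash
# /usr/local/bin/notify-discord.sh (sketch)
# Usage: notify-discord.sh "restream-twitch started"
WEBHOOK_URL="https://discord.com/api/webhooks/<id>/<token>"

curl -sS -H "Content-Type: application/json" \
  -d "{\"content\": \"$1\"}" \
  "$WEBHOOK_URL"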

Simple Worked

The re-stream server was not flashy. It sat quietly in a DigitalOcean data center, ingesting a video stream from whichever OBS host was active and passing it along, and it did that one job with the kind of reliability that only boring infrastructure can provide. It did not mix audio, render overlays, or run chat bots.

The total cost was about $10 since we had it up for about half a month. CPU and memory usage were so low that we could probably have run it on a potato. The real value was architectural: by decoupling the public-facing stream from the chaos of our primary OBS host, we bought ourselves resilience. A five-second hiccup stayed a five-second hiccup, not a full restart. A network dropout became a log entry, not a viewer complaint.

Was It Worth It?

If you are planning a long-duration stream and you have any doubt about the stability of your primary OBS infrastructure, and honestly, even if you do not, a re-stream server is cheap insurance. mediamtx and ffmpeg are free. DigitalOcean is reliable and inexpensive. The peace of mind is worth every penny.

We would build it again, probably with the same tools. Though next time, we might actually get mediamtx authentication working. Or we might just make the random string a little longer and call it a day.