Measuring Vital Signs During Video Calls: rPPG in Zoom and Teams
Can you measure someone's heart rate through a Zoom or Teams video call? rPPG in video conferencing is technically possible but faces real compression, latency, and privacy hurdles.

Imagine joining a telehealth appointment and having your resting heart rate measured passively through your laptop camera — no finger clip, no wristband. This is the promise of rPPG in video conferencing, and it's closer to reality than most people realize.
But the path from "technically possible in a lab" to "clinically useful in a Zoom call" is littered with hard engineering problems. Video codecs, network jitter, background virtual effects, and privacy law all conspire to make this harder than it looks.
Why Video Conferencing Is a Natural rPPG Target
The telehealth boom accelerated during COVID-19 and hasn't fully retreated. Millions of clinical consultations now happen over video platforms. In many of these, a clinician would benefit from basic vital signs — heart rate, respiratory rate, possibly SpO2 — without requiring the patient to use a separate device.
The camera is already there. The face is already visible. The video feed already exists. So why not extract the physiological signal from it?
This is exactly the question several research groups and startups have pursued. Companies like Binah.ai and Nuralogix, along with research spinouts from MIT and EPFL, have demonstrated rPPG systems that work from telehealth video feeds.
The Video Codec Problem
Here's the challenge that distinguishes real-world video conferencing rPPG from lab rPPG: you're not working with raw video.
Zoom, Teams, Google Meet, and virtually every video conferencing platform applies aggressive video compression before transmission. H.264 and H.265 codecs reduce bandwidth by exploiting temporal redundancy — exactly the small inter-frame variations that rPPG relies on.
A typical Zoom call at 720p uses 1-2 Mbps. An uncompressed 720p/30fps feed (8-bit RGB) would be roughly 660 Mbps — a compression ratio on the order of 300-650x. The codec is specifically designed to throw away the small temporal changes that don't affect perceptual quality — which includes the 0.1-1% pixel intensity fluctuations from blood volume pulsation.
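These figures are easy to sanity-check. A quick back-of-envelope sketch (assuming 8-bit RGB as the uncompressed pixel format):

```python
# Back-of-envelope check of raw vs. call bandwidth for 720p/30fps.
# Assumes 8-bit RGB (24 bits per pixel) as the uncompressed format.
width, height, fps, bits_per_pixel = 1280, 720, 30, 24

raw_mbps = width * height * bits_per_pixel * fps / 1e6
print(f"Uncompressed 720p/30fps: {raw_mbps:.0f} Mbps")

for call_mbps in (1.0, 2.0):  # typical video call bitrates
    print(f"Compression ratio at {call_mbps:.0f} Mbps: {raw_mbps / call_mbps:.0f}x")
```

Subsampled formats like 4:2:0 YUV halve the raw figure, which is why quoted ratios vary; the order of magnitude is the point.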
Verkruysse et al. (2008, DOI: 10.1364/OE.16.021434) first demonstrated rPPG from consumer cameras, but in uncompressed or minimally compressed settings. The codec problem became apparent as researchers tried to apply the same approach to real video call streams.
Does rPPG Survive H.264 Compression?
The short answer: partially, at higher bitrates, with significant accuracy loss at lower bitrates.
McDuff et al. (2017) conducted a systematic analysis of rPPG under video compression at various quality levels (DOI: 10.1109/TAFFC.2017.2774047). At constant rate factor (CRF) values of 18-23 (high quality), rPPG heart rate RMSE approximately doubled compared to uncompressed. At CRF 30-35 (typical streaming quality), accuracy degraded by 3-5x.
The reason is that P-frames and B-frames in H.264 don't encode absolute pixel values — they encode motion vectors and residuals. The subtle color shifts from blood volume pulsation are in the residual signal, which is quantized away at lower bitrates.
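The effect of residual quantization can be illustrated with a toy simulation. This is not a real codec model: the ~0.5% pulse amplitude and the step sizes are assumptions chosen for illustration, and real encoders quantize transform coefficients rather than raw pixels. Still, it shows how a small pulsatile signal loses signal-to-noise ratio as the quantization step grows relative to its amplitude.

```python
import numpy as np

# Toy illustration, not a real H.264 model: a small pulsatile signal
# riding on a static intensity level, rounded to coarser and coarser
# quantization steps. The ~0.5% pulse amplitude is an assumption.
fs = 30.0                      # frame rate, Hz
t = np.arange(0, 30, 1 / fs)   # 30 s of "video"
hr_hz = 1.2                    # 72 bpm
mean_level = 120.0             # average green-channel intensity
pulse = 0.6 * np.sin(2 * np.pi * hr_hz * t)

def snr_at_hr(step):
    """SNR (dB) at the heart-rate bin after rounding to a given step."""
    q = np.round((mean_level + pulse) / step) * step
    q = q - q.mean()
    spec = np.abs(np.fft.rfft(q)) ** 2
    freqs = np.fft.rfftfreq(len(q), 1 / fs)
    sig = spec[np.argmin(np.abs(freqs - hr_hz))]
    noise = spec[freqs > 0].sum() - sig
    return 10 * np.log10(max(sig, 1e-12) / max(noise, 1e-12))

for step in (0.1, 0.5, 1.0):   # finer to coarser quantization
    print(f"step {step}: SNR {snr_at_hr(step):+.1f} dB")
```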
Some rPPG approaches specifically designed for compressed video:
- Frequency-domain extraction: Working in the frequency domain (FFT of channel means over time) is more robust to quantization noise than time-domain methods
- Block transform awareness: Since H.264 applies 4x4 or 8x8 integer transforms within 16x16 macroblocks, rPPG features can be computed at the block level to avoid cross-block interpolation artifacts
- Bitrate adaptive selection: Reject measurements when estimated available bitrate drops below threshold
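The first bullet can be sketched concretely: average the green channel of the face region per frame, then pick the spectral peak inside a plausible heart-rate band. The function name, parameters, and band limits below are illustrative, not taken from any particular system.

```python
import numpy as np

def estimate_hr_bpm(green_means, fps, band=(0.7, 4.0)):
    """Estimate heart rate from per-frame green-channel means.

    Frequency-domain peak picking is more tolerant of quantization
    noise than time-domain beat detection. `band` restricts the
    search to plausible heart rates (42-240 bpm).
    """
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()
    x = x * np.hanning(len(x))          # reduce spectral leakage
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fps)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    peak_hz = freqs[mask][np.argmax(spec[mask])]
    return 60.0 * peak_hz

# Synthetic check: a 72 bpm pulse buried in sensor noise.
fps = 30.0
t = np.arange(0, 20, 1 / fps)
rng = np.random.default_rng(1)
trace = 120 + 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.2 * rng.standard_normal(len(t))
print(f"Estimated HR: {estimate_hr_bpm(trace, fps):.1f} bpm")
```

A 20-second window at 30 fps gives 0.05 Hz (3 bpm) frequency resolution, which is why window length trades off directly against responsiveness.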
Network Jitter and Frame Drops
Beyond compression, real-time video calls experience packet loss and jitter. A dropped frame creates a gap in the PPG time series. A jittered frame (delayed and then played back in burst) creates timing errors that corrupt inter-beat interval measurements.
Unlike offline video analysis, where the signal is recorded and analyzed post-hoc, real-time rPPG in a video call must handle these artifacts in real time. This requires:
- Timestamp regularization to correct for jitter
- Missing frame interpolation (or exclusion windows)
- Confidence estimation that accounts for frame quality
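A minimal sketch of the first two requirements, resampling a jittered trace onto a uniform grid and flagging dropout windows. The function and its thresholds are illustrative assumptions, not a production implementation.

```python
import numpy as np

def regularize_trace(timestamps, values, fps=30.0, max_gap=0.2):
    """Resample an irregular rPPG trace onto a uniform time grid.

    Jittered timestamps are handled by linear interpolation onto the
    grid; grid points farther than `max_gap` seconds from any real
    frame (i.e. inside dropouts) are flagged so a downstream
    confidence estimator can exclude those windows.
    """
    ts = np.asarray(timestamps, dtype=float)
    vals = np.asarray(values, dtype=float)
    grid = np.arange(ts[0], ts[-1], 1.0 / fps)
    resampled = np.interp(grid, ts, vals)
    # Distance from each grid point to the nearest real frame.
    idx = np.clip(np.searchsorted(ts, grid), 1, len(ts) - 1)
    nearest = np.minimum(np.abs(grid - ts[idx - 1]), np.abs(ts[idx] - grid))
    valid = nearest <= max_gap
    return grid, resampled, valid

# Example: 10 s of jittered frames with a 0.5 s dropout around t = 4 s.
rng = np.random.default_rng(2)
ts = np.sort(np.arange(0, 10, 1 / 30) + rng.uniform(-0.005, 0.005, 300))
keep = (ts < 4.0) | (ts > 4.5)
grid, trace, valid = regularize_trace(ts[keep], np.sin(2 * np.pi * 1.2 * ts[keep]))
print(f"{(~valid).sum()} of {len(grid)} grid points fall inside dropouts")
```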
The cumulative effect is that typical telehealth rPPG accuracy is roughly 3-5x worse than controlled lab conditions, based on published validation data.
Background Replacement and Virtual Environments
Modern video conferencing platforms include AI-based background replacement and blur effects. These features use segmentation models that track the person's outline and apply effects to the background.
The relevant concern for rPPG: does background segmentation affect the face region? In most current implementations, the face is preserved with high fidelity. However, hair at the face boundary, ear regions, and the neck (all potentially useful rPPG measurement sites) may be partially blurred or segmented as background.
Lighting normalization and color enhancement features in some platforms (the "touch up my appearance" slider in Zoom) actively modify the luminance and color of the face — directly corrupting the rPPG signal. Users running rPPG measurement should disable these features.
Privacy and Consent in Video-Based Vitals
Measuring physiological signals from a video call without explicit disclosure raises significant ethical and legal questions.
Under GDPR, physiological measurements are special-category health data even if derived passively; under HIPAA, they are protected health information when handled by covered entities. Collecting heart rate through a video call without consent could constitute processing of sensitive health data without a lawful basis.
Worse, rPPG can infer more than heart rate. Respiratory rate, stress proxies from HRV, and potentially emotional state signals are all accessible from video. The potential for surveillance beyond stated purpose is real.
Responsible rPPG deployment in telehealth requires:
- Explicit disclosure that video is being analyzed for physiological signals
- Clear data governance: what's measured, what's stored, how long
- Patient control over whether analysis is active
- Regulatory compliance (FDA clearance for clinical claims, CE marking in EU)
Platforms that embed rPPG without disclosure, or that market accuracy claims without clinical validation, face both regulatory risk and user trust damage.
Current Deployment Landscape
Several commercial systems have achieved real deployment in telehealth video contexts:
Binah.ai offers an SDK for iOS, Android, and web that works from camera feeds including video call inputs. They report clinical-grade accuracy in controlled conditions and have published validation studies.
Nuralogix offers the Anura app with contactless vital signs from a 30-second video selfie. Their DeepAffex platform has been integrated into some telehealth workflows.
Neteera and others focus on radar-based contactless monitoring, which sidesteps the video compression problem entirely by using dedicated UWB radar hardware.
The most clinically validated path currently involves a dedicated 30-second still-camera measurement (patient sits still, faces camera, good lighting) rather than passive monitoring during a live conversation.
What rPPG in Video Calls Can Actually Measure Today
Practically speaking, here's what's achievable in a clinical telehealth context with current technology:
- Resting heart rate: 3-6 bpm RMSE with good lighting and a 30-second quiet window. Acceptable for wellness trending.
- Respiratory rate: 1-3 rpm error in controlled conditions. Useful as a secondary check.
- Heart rate variability: Limited reliability. Not suitable for clinical HRV assessment.
- SpO2: Not currently reliable from standard video conferencing cameras.
The technology is genuinely useful for triage, wellness monitoring, and capturing baselines during consultations. It shouldn't replace clinical-grade monitoring for high-acuity situations.
Related Reading
- rPPG Telehealth Remote Monitoring — broader telehealth deployment guide
- rPPG Lighting Conditions Accuracy — lighting optimization for best results
- rPPG Algorithms Deep Dive — the processing pipeline explained
- PPG Home Telehealth Monitoring — contact PPG in remote care
- Remote Photoplethysmography Accuracy Factors — comprehensive accuracy review
Frequently Asked Questions
Can Zoom or Teams measure your heart rate automatically? Neither platform currently has native vital signs measurement. Third-party SDKs can be integrated into custom telehealth platforms built on these video stacks, but standard consumer Zoom and Teams do not measure physiological signals.
Does video compression affect rPPG accuracy? Yes, significantly. H.264 and H.265 compression throws away the small inter-frame variations that rPPG relies on. At typical video call bitrates, rPPG accuracy degrades 3-5x compared to uncompressed video.
Is measuring heart rate through a video call private? Extracting physiological data from video without disclosure raises GDPR and HIPAA concerns. Physiological measurements are health data. Responsible implementations require explicit consent, clear data policies, and regulatory compliance.
What features in video call software hurt rPPG accuracy? Background blur, virtual backgrounds, face beauty filters, and auto-lighting adjustment all interfere with rPPG. Users measuring vital signs via video should disable these features.
How long does a video call rPPG measurement take? Most systems require 30-60 seconds of relatively still, well-lit video for an accurate heart rate reading. Passive continuous monitoring during natural conversation is significantly less accurate due to head motion and speaking artifacts.
Can rPPG work with poor internet connection? Poor connections cause more compression artifacts and frame drops, further degrading rPPG accuracy. A stable connection with at least 1 Mbps uplink is recommended for clinical rPPG measurements.