The Reframing Problem

When you crop a 16:9 master to 9:16 for Stories, you keep less than a third of the frame. A subject centered in widescreen might be cut off entirely in portrait. Manually repositioning the crop for every shot across every format is tedious and expensive.

How Smart Reframe Works

Versionary uses the Google Cloud Video Intelligence API with person detection to analyze every frame of your source video. The system:
  1. Detects shot boundaries — identifies every cut in your footage
  2. Tracks subject position — locates the primary subject’s face and body position in each shot using nose landmark tracking
  3. Calculates per-shot crops — positions the crop window to keep the subject centered for each target aspect ratio
  4. Builds dynamic expressions — creates frame-accurate crop positioning that changes at every shot boundary
The result: each aspect ratio gets its own intelligent crop that follows your subject through the entire spot, changing position at every cut.
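The per-shot crop step above can be sketched as a small function: center the crop window on the detected subject, then clamp it so it never leaves the frame. This is an illustrative sketch, not Versionary's actual code; `Shot` and `compute_crop_x` are hypothetical names.

```python
from dataclasses import dataclass

@dataclass
class Shot:
    start: float      # shot start time, seconds
    end: float        # shot end time, seconds
    subject_x: float  # detected subject center, source pixels

def compute_crop_x(shot: Shot, source_w: int, crop_w: int) -> int:
    """Center the crop window on the subject, clamped to the frame edges."""
    x = shot.subject_x - crop_w / 2
    return int(max(0, min(x, source_w - crop_w)))

# Example: a 9:16 crop (608 px wide) from a 1920x1080 source.
shots = [Shot(0.0, 1.5, 724), Shot(1.5, 3.2, 842), Shot(3.2, 5.0, 916)]
crop_xs = [compute_crop_x(s, 1920, 608) for s in shots]
print(crop_xs)  # → [420, 538, 612]
```

The clamp matters: a subject near the left or right edge would otherwise push the crop window outside the frame.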

Supported Aspect Ratios

| Ratio | Common Use | Crop from 16:9 Source |
|-------|-----------|-----------------------|
| 16:9 | YouTube, Broadcast, OLV | Full frame (no crop) |
| 9:16 | Instagram Stories, TikTok, Reels | ~32% of width preserved |
| 4:5 | Meta Feed, Instagram Feed | 45% of width preserved |
| 1:1 | LinkedIn, Square placements | 56% of width preserved |

Accuracy

Smart Reframe achieves approximately 75% accuracy on initial auto-crop positioning. For shots where the AI doesn’t nail the framing — action sequences, wide shots with multiple subjects, or artistic compositions — the conversational editing system lets you make instant corrections.

Under the Hood

The system builds nested FFmpeg conditional expressions that evaluate at each shot boundary (the x expression is quoted so its commas are not parsed as filtergraph separators):

```
crop=608:1080:'if(lt(t,1.5),420,if(lt(t,3.2),538,if(lt(t,5.0),612,420)))'
```
This single expression tells FFmpeg to position the crop at x=420 for the first shot, x=538 for the second, x=612 for the third, and so on — all in one render pass with zero quality loss.
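Such an expression can be generated mechanically from the per-shot crop positions by folding the shot list from the end inward. A minimal sketch (`build_crop_x_expr` is a hypothetical helper, not Versionary's API):

```python
def build_crop_x_expr(shots):
    """Build a nested FFmpeg if() expression selecting crop x per shot.

    `shots` is a list of (end_time, x) pairs in playback order; the last
    entry's end time is ignored and its x serves as the fallback value.
    """
    expr = str(shots[-1][1])  # default: final shot's x position
    for end, x in reversed(shots[:-1]):
        expr = f"if(lt(t,{end}),{x},{expr})"
    return expr

shots = [(1.5, 420), (3.2, 538), (5.0, 612), (None, 420)]
print(f"crop=608:1080:{build_crop_x_expr(shots)}")
# → crop=608:1080:if(lt(t,1.5),420,if(lt(t,3.2),538,if(lt(t,5.0),612,420)))
```

Because `t` is evaluated per frame, the crop position snaps to the new value exactly at each shot boundary; no intermediate renders or concatenation steps are needed.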