
  • Hey all, just changed over the backend. After 15 years I figured it was time to give it a bit of an update. It's probably gonna be a bit weird for most of you, and I'm sure there are a few bugs to work out, but it should work mostly the same as before... hopefully :)

RED V-Raptor [X] 8K VV

I'm also curious about bright colored highlights (typically red stop lights on cars, or Astera tubes), how does EH handle these ? Are you able to recover some better color information in these too ?
Of course you can. Imagine stopping down 3-4 stops and looking at the highlights in that image. That's what you'll recover in EH mode.
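A quick way to sanity-check that claim is plain stop arithmetic: each photographic stop is a factor of two in linear light. A tiny sketch (nothing camera-specific here):

```python
def stops_to_factor(stops: float) -> float:
    """Each stop is a factor of 2 in linear light."""
    return 2.0 ** stops

# Stopping down 3 stops means 1/8th the linear exposure;
# 4 stops means 1/16th - that's the highlight headroom being described.
three = stops_to_factor(3)  # 8.0
four = stops_to_factor(4)   # 16.0
```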
 

RED V-Raptor [X] & XL [X]: 8K120, Global Shutter, 4K240


By Ordinary Filmmaker


 
So I've been playing with the X and can't turn on GIO scopes; the option is greyed out. Just noticed the RED Control app was updated to support the X.
 

You need to toggle on the colors in the GIOSCOPE menu. They are disabled by default.
 

RED TECH | V-RAPTOR [X] 8K VV




By RED Digital Cinema


 
Anyone tested DSMC2 screens with the EVF adaptor? It likely doesn't work, but it would be cool if it did.
From RED's EVF FAQ page:

Will the DSMC3 Adapter A support DSMC2 Monitors, RED ONE EVF, or RED PRO EVF (LCOS & OLED)?


No, these monitors utilize a different video signal format.

Will the DSMC2 RED EVF (OLED) be compatible with the DSMC3 Adapter A?


Yes, when connected to a DSMC3 camera for the first time, the camera will perform a firmware update on the EVF. If the same DSMC2 EVF is then reconnected to a DSMC2 camera, the DSMC2 camera will update the EVF firmware back to a DSMC2-compatible version.
 
The problem with long exposures is that hot pixels get extremely bad all over the image. You could have a blackshade for long exposures specifically but as I recall it didn't eliminate the hot pixels entirely. I guess the circuitry and processing is really fine tuned for shorter integration times.
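For anyone cleaning this up in post instead, a hot pixel can be knocked out by comparing each pixel to the median of its neighbours - a rough stand-in for the kind of flashing-pixel repair an NLE does. The function and threshold below are purely illustrative, not any camera's or SDK's actual algorithm:

```python
def repair_hot_pixels(img, thresh=200):
    """Replace pixels that sit far above their 3x3 neighbourhood median.
    `img` is a list of rows of linear pixel values; `thresh` is a made-up
    sensitivity knob, not a real camera parameter."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            # Gather the 3x3 neighbourhood, excluding the pixel itself.
            neigh = [img[j][i]
                     for j in range(max(0, y - 1), min(h, y + 2))
                     for i in range(max(0, x - 1), min(w, x + 2))
                     if (j, i) != (y, x)]
            neigh.sort()
            med = neigh[len(neigh) // 2]
            if img[y][x] - med > thresh:
                out[y][x] = med  # clamp the outlier to its neighbourhood
    return out
```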

Frame averaging or summing is still something you can do in post, though it's a lot of work, and there will still always be a 'blip' between frames as the sensor resets meaning motion blur will not be continuous, basically you can't get all the way to a 360 degree shutter. The nice thing about doing it in camera is it seemed to do the averaging in 16 bit before the compression and encoding, and so up to 16 frames were averaged and written as one in the file, saving space as well.
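As a rough model of the in-camera behaviour described above (accumulate at higher precision, write one frame per group), here's a pure-Python sketch. Frames are flat lists of pixel values, and the 16-frame group size mirrors the post; the real pipeline's precision and ordering are assumptions:

```python
def average_frames(frames, group=16):
    """Average each consecutive group of frames at full float precision,
    emitting one output frame per group (16 in -> 1 out), which is why
    the averaged files also save space."""
    out = []
    for i in range(0, len(frames) - group + 1, group):
        chunk = frames[i:i + group]
        width = len(chunk[0])
        out.append([sum(f[p] for f in chunk) / len(chunk)
                    for p in range(width)])
    return out
```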

And you gain much more exposure with frame summing, which is cool when shooting at night.

In this example I pushed the sensor and color correction very hard. (no denoise added)


 
I am still on DSMC2 - if DSMC3 doesn't have speed ramping, that's too bad... it's one of those situational tools where, if you know you have it, you tend to find scenarios where it's useful.

I made a preset button where it would speed ramp to 1 fps exposure in 1 second, and releasing it went back to 24. A lot like hitting the run/stop on a film camera and seeing it wind down and back up - a great effect you still see in many music videos and some commercials. It would work even better if it could ramp in less than 1 second, but that was the fastest...
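The ramp in that preset can be sketched as a simple frame-rate schedule. The linear shape and step count here are assumptions - the camera's actual ramp curve isn't documented in this thread:

```python
def fps_ramp(start=24.0, end=1.0, steps=24):
    """Hypothetical linear frame-rate ramp from `start` to `end` fps.
    With 24 steps beginning at the starting rate, the ramp takes
    roughly 1 second, matching the preset described above."""
    return [start + (end - start) * i / (steps - 1) for i in range(steps)]

schedule = fps_ramp()
# First frame runs at 24 fps, last at 1 fps.
```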
 
And you gain much more exposure with frame summing, which is cool when shooting at night.

In this example I pushed the sensor and color correction very hard. (no denoise added)


Not bad at all. Is the banding fixable?
 
 
The problem with long exposures is that hot pixels get extremely bad all over the image. You could have a blackshade for long exposures specifically but as I recall it didn't eliminate the hot pixels entirely. I guess the circuitry and processing is really fine tuned for shorter integration times.

Frame averaging or summing is still something you can do in post, though it's a lot of work, and there will still always be a 'blip' between frames as the sensor resets meaning motion blur will not be continuous, basically you can't get all the way to a 360 degree shutter. The nice thing about doing it in camera is it seemed to do the averaging in 16 bit before the compression and encoding, and so up to 16 frames were averaged and written as one in the file, saving space as well.
That is actually incorrect - I mean the second part, stating that hot pixels are still there after a proper blackshade. I did a lot of timelapse work on my DSMC1 REDs, and hot pixels and noise are not an issue at all when the camera is properly blackshaded. It takes a really long time to calibrate on long exposures (a 1 s, 360-degree shutter frame averaged to 1/16 takes about an hour to blackshade, and should be done outside at the ambient temperature), but after that it's silky smooth. Even if you do get one or two bad pixels (which has never happened to me), there's a flashing pixel adjust setting in the raw tab (it's in the R3D SDK, so it's present in any NLE) which totally eliminates hot pixels.

More to the point, frame averaging is equivalent to temporal noise reduction in Resolve, and 16 frames is far more than Resolve offers. 3200 ISO on an Epic MX timelapse with frame averaging over 8 frames is cleaner than the native 800 ISO. You can even go to 6400 in timelapse with frame averaging set to 16.

Speaking of a "blip" - no, there's no blip at all. Car light trails, for one, record as a single continuous stream. With frame averaging the camera looks 8 frames back and 8 frames forward, and when set to 1 fps, 360-degree continuous, frame averaged to 8 or 16 frames, I've never experienced any "blip".

Frame summing is another beast, and I find it basically useless in most cases - because it does literally what it says: it sums up all the frames, exaggerating everything in the image, good or bad - like noise. Usually you'd reach for frame summing to add exposure, but in practice frame averaging works better for that purpose.
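The averaging-vs-summing tradeoff above can be shown with a toy noise model (signal plus Gaussian read noise per frame): summing n frames lifts the level by a factor of n, roughly log2(n) stops, while averaging keeps the level and shrinks the noise by about sqrt(n). The numbers below are illustrative only, not sensor measurements:

```python
import math
import random

def simulate_average(n_frames=16, signal=100.0, noise_sigma=10.0, trials=2000):
    """Estimate the mean level and residual noise after averaging n frames,
    under a toy signal-plus-Gaussian-read-noise model."""
    rng = random.Random(0)  # fixed seed so the run is repeatable
    samples = []
    for _ in range(trials):
        frames = [signal + rng.gauss(0.0, noise_sigma) for _ in range(n_frames)]
        samples.append(sum(frames) / n_frames)
    mean = sum(samples) / trials
    sigma = math.sqrt(sum((s - mean) ** 2 for s in samples) / trials)
    return mean, sigma

mean, sigma = simulate_average()
# Level stays near 100 while noise drops from 10 toward 10/sqrt(16) = 2.5;
# summing the same 16 frames would instead return a level near 1600
# (4 stops up), with the noise scaled up along with it.
```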
 
Speaking of HDRx - the main problem I've run into, even on timelapse/landscape work, was that setting it to anything over 2-3 stops was effectively the same as boosting ISO in camera, because it introduced a lot of noise from the underexposed image when the streams were combined. I was never happy with the results Magic Motion or the simple method gave me - the image always looked somewhat wrong to me - so I used to mix the two streams manually in Resolve using a layer mixer and luma keying. Even then, the noise from the underexposed image crept into the mix, making HDRx look worse than, say, shooting at a high ISO and denoising in Resolve. Then again, it used less space and was less taxing on post hardware (denoising HDRx crashed a lot).

The new method of combining the streams looks far superior, and no sample I've seen introduces any additional noise into the image. I suspect people will use it a lot for wildlife/timelapse/nature work. Still, I wouldn't necessarily use it on commercials or features, where the lighting mix can be controlled to some degree.
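The manual Resolve mix described above (layer mixer plus luma key) amounts to a luma-weighted crossfade between the two streams. A hypothetical per-pixel version - the `knee` and `stops` values are made up, inputs are linear 0-1, and this is not RED's or Resolve's actual blend math:

```python
def blend_hdrx(a_val, x_val, stops=3.0, knee=0.7):
    """Crossfade a normally exposed A-track value with an X-track value
    captured `stops` under, re-exposed to match. Below `knee` the A-track
    passes through; above it, the blend fades toward the X-track."""
    x_matched = x_val * 2.0 ** stops            # lift X-track to A-track level
    t = min(max((a_val - knee) / (1.0 - knee), 0.0), 1.0)
    return (1.0 - t) * a_val + t * x_matched
```

This also illustrates the complaint above: near and above the knee, the re-exposed X-track (and its amplified noise) dominates the result.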
 
So what's stopping RED from enabling the new blending algorithm with legacy HDRX streams (and hence getting some, if not all, of the benefits of the new Highlight Extension)? They seem more similar than originally guessed, and if the blending is done outside the camera anyway, there's no reason it shouldn't work... Worst case, you'd have to limit your HDRX bracket to +3 on DSMC1/2, but most don't go beyond that anyway because of the motion artefacts.


Are you sure you weren't exposing HDRX wrong? Your A-track should be set to a normal exposure for the scene/ISO/etc. (using false colour, gioscope, or a light meter), while the X-track captures 1 to 6 extra highlight stops as protection beyond that base exposure. I've never noticed any additional noise when using Simple Blend (Magic Motion stopped working in IPP2, likely because of a gamma mismatch - it wasn't designed for Log3G10 - but that's clearly been fixed with Highlight Extension)...
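The exposure relationship in that advice is simple to state: the X-track is a shorter exposure, cut by one factor of two per protection stop. A small helper (illustrative only, not a RED API):

```python
def x_track_shutter(base_shutter_s: float, protection_stops: float) -> float:
    """Shutter time for the underexposed X-track, given the A-track's
    base shutter time and the desired stops of highlight protection."""
    return base_shutter_s / (2.0 ** protection_stops)

# A 1/48 s A-track with +3 stops of protection implies a 1/384 s X-track.
```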
 
Between pre-pro, meetings, and a somewhat hectic week, I finished up the rest of my boring tests. It's raining hard in LA at the moment, but I just wanted to chat about the V-Raptor X a bit and some of the things I've been tinkering with.

Global Shutter

Did some fun rigging for handheld as the camera inspired me to relook at my "Belly Cam" setup. Been sharing a bit of this on Instagram, but handheld with Global Shutter = awesome and with the large sensor a joy. The whole Global Shutter thing is fascinating. Some have been begging for it, some may not need it for their work. I'm in the area of need/want/desire as it has clear ramifications for VFX and Virtual Production as well as eliminating skew on pans with very long or wide lenses. There's also a subtle cadence of motion aspect to it that feels very good. Something I certainly found out on Komodo(s). But wherever you land on the spectrum of how you feel about Global Shutter, it's extremely impressive what RED has done in this category of digital cinema camera.

Extended Highlights

More tinkering, more charts, more experiments. I feel that the initial goal is not to rely on it, but rather to deploy it as needed. However, as you gain an understanding of how it works, you can do some very interesting things. The TL;DR here is how it is implemented with the 3+ stops, main exposure, and temporal blending of about 6 frames surrounding the base exposure. There is also some sauce in their new blend. And yes you can break it with fast motion, but it's more about what you can do if you don't break it. To the point above, it may be something that can be thrown into "rolling" Raptor, but I'm unsure if there have been more hardware changes for the internal processing. If it can be added, I know for sure Global Shutter is doing some of the "good stuff" here in terms of capturing adjacent frames instantaneously.
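Purely to illustrate the temporal component mentioned above (roughly 6 frames surrounding the base exposure), here's what a naive windowed blend looks like. RED's actual Extended Highlights math is not public, so treat this as a stand-in only:

```python
def temporal_window_blend(frames, center, radius=3):
    """Average a window of +/- `radius` frames around the `center` frame.
    Frames are flat lists of linear pixel values; a radius of 3 gives the
    ~6 neighbouring frames described in the post."""
    lo = max(0, center - radius)
    hi = min(len(frames), center + radius + 1)
    window = frames[lo:hi]
    width = len(window[0])
    return [sum(f[p] for f in window) / len(window) for p in range(width)]
```

This also hints at why fast motion can break it: the window assumes the neighbouring frames see roughly the same scene.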

Audio!

Lots of stuff in camera, went through about a dozen of the 40 or so mics I have here. Experimenting with gain in camera with mics plugged directly in. All very good really. Some really have been looking for an improvement here and I can say the preamps are better side by side. I do want RED to continue pushing forward with sound/audio down the line as well.

Optical Chamber

New internal optical path, flocking, and some visible changes = good. I have yet to see a seam with flaring. I gave up after 2 hours, lots of different lights, and lots of different lenses. Visibly on the sensor without a lens mount, you can see it, but I am not seeing it in footage thus far. I've inquired about this a bit and have been told it's a combination of the sensor design, new optical path, and hardware overall. Hard to say if the locking collar is notably different or better, but it's been fine.

Face Detect Autofocus

Early days and good enough to get you through an interview and some creative work. You can outrun it for sure, but if you are landing a wide, medium, or closeup it works well. I've been tinkering with the settings like nearest to the camera and all that. Pretty interesting. 3 days this week I actually only had RF glass on the camera to see how autofocus overall and usability works. I do think the newer RF focus by wire is a bit more responsive than the EF lenses overall, which makes sense as the newer motors are more powerful. In terms of AF speed, some lenses "could be better", some are pretty damn good. The 24-70mm f/2.8L IS for instance is very, very good on V-Raptors and Komodos. Quiet and quick to focus. The L Primes leave a bit to be desired in terms of speed and silent operation. I do think Canon is on the verge of nailing the whole thing down with their latest zoom showing some of the power of what they can achieve with a more motion-centric RF design. Though in the case of the 24-105mm f/2.8L the heavy reliance on lens correction frustrates me a bit.

Working with Other RED Cams

Briefly back to tests: I've been investigating working side by side with the rest of the DSMC3 cameras. Easy to do in terms of matching - RED's base calibration target is the same across bodies, so you'll likely see more variance from the optics and angle of incidence of light than anything, which is par for the course. The Raptor 8K VV and Raptor X have really, really good color reproduction as far as I'm concerned; I think most of us know this at this point. Skin looks solid. I did whip out some tungsten and am always reminded how much I love working with hot lights, though that's rarer these days.

This week is a bit nasty with weather, and Tues & Weds are all-day meetings, but I'm going to try to throw together a compelling test shoot to show some of this stuff off if time permits.
 
So what's stopping RED from enabling the new blending algorithm with legacy HDRX streams (and hence getting some, if not all, of the benefits of the new Highlight Extension)? They seem more similar than originally guessed, and if the blending is done outside the camera anyway, there's no reason it shouldn't work... Worst case, you'd have to limit your HDRX bracket to +3 on DSMC1/2, but most don't go beyond that anyway because of the motion artefacts.



Are you sure you weren't exposing HDRX wrong? Your A-track should be set to a normal exposure for the scene/ISO/etc. (using false colour, gioscope, or a light meter), while the X-track captures 1 to 6 extra highlight stops as protection beyond that base exposure. I've never noticed any additional noise when using Simple Blend (Magic Motion stopped working in IPP2, likely because of a gamma mismatch - it wasn't designed for Log3G10 - but that's clearly been fixed with Highlight Extension)...
Yes, I'm aware of how to use HDRx ;) As I've mentioned, the noise appears when you mix the streams manually using blending in Resolve. Magic Motion and Simple don't introduce noise, but they tend to look somewhat wrong to me, and pulling a usable image of a landscape from them is really hard for some reason. It gets worse as you add stops, because the sensor is blackshaded to the base exposure, and going shorter on the shutter is basically going off the blackshade, which introduces more noise into the second stream.

I highly doubt they will ever bring it to older cams; their marketing is pretty clear on this, and it seems to me the older generations aren't going to get even the raw benefits for now, except maybe the debayer in the SDK.
 