
Viper Film Stream

Of course it's games - if you don't frame the chart in one dimension, the numbers in that dimension no longer mean what they say. You can't claim a vertical resolution of 1400 or so if you can only ever count 1000 lines before falling off the edge of the chart! It doesn't matter that those lines exist on the chart outside what the camera captured.

Graeme

You will make me go to the office now!

If you frame the same chart with RED, full frame, with 1500 lines in the center, and are forced to crop the image to 2.35:1, the resulting image, which misses the top and bottom, will look exactly like the one above... what would change to make the lines more than what you see here?

Grab a frame of the chart with RED, full frame of course at 16:9, export a 4K TIFF, go to Photoshop and count the lines - they will be 1500. Then crop it to 2.35:1 and count the lines again. Would they be more? No, they will stay 1500...

Put the Viper in scope mode, frame the chart on exactly the same area that the 2.35:1 cropped RED frame above covers, export a TIFF that is anamorphic 1920 x 1080, open Photoshop, make a new image with the same resolution as the cropped RED frame (that is 4096 x 1742), fit the image to it horizontally and vertically... and the result is the Viper image above.

That shows the same 1450 lines...

What games?
 
We all know that a sensor with 1080 pixels vertical cannot ever resolve more than 1080 lines vertical. It doesn't matter if you bin sub-pixels or not - we have 1080 rows of pixels that get recorded. There is absolutely no way that 1080 rows can show more than 1080 lines. If your chart is showing otherwise, we must look at your measurement methodology.

TV charts are usually labeled in lines per picture height. That means, for the numbers to mean what they say, you must frame the chart so that the top and bottom of the chart are lined up with the top and bottom of the picture. By lining it up so that the left and right match, and the top and bottom fall off, those numbers no longer mean what they say.
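A quick sketch of that mis-framing effect, using this thread's numbers (the ~1400-line chart reading and the Viper's 2.37:1 scope frame; a toy check, not a measurement):

```python
# A chart labeled in TV lines per picture height assumes the chart's
# height fills the frame. If a 16:9 chart is framed by width inside a
# wider 2.37:1 frame, only part of its height is captured, and the
# printed labels overstate lines per *captured* picture height.

chart_aspect = 16 / 9
frame_aspect = 2.37
height_fraction = chart_aspect / frame_aspect  # fraction of chart height captured

label = 1400                         # what the chart markings read
true_lpph = label * height_fraction  # lines per captured picture height
print(round(height_fraction, 2))     # 0.75
print(round(true_lpph))              # 1050
```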

Back in the days of PAL and NTSC with a fixed number of horizontal lines that made up the picture - it made sense to measure resolution in lines per picture height to give a fair number across different aspect ratios.

However, with film, we've always measured resolution horizontally because of the wide variety in aspect ratios, and the use of anamorphics.

That is why I use my circular sinusoidal zone plates for measuring resolution. I frame horizontally as I want to measure horizontal resolution. I get accurate, repeatable results, great measures of any aliasing and can plot MTF very quickly. Resolution "trumpets" are not anywhere near as nice nor easily interpreted for such measurements.

Graeme
 
We all know that a sensor with 1080 pixels vertical cannot ever resolve more than 1080 lines vertical. It doesn't matter if you bin sub-pixels or not - we have 1080 rows of pixels that get recorded. There is absolutely no way that 1080 rows can show more than 1080 lines. If your chart is showing otherwise, we must look at your measurement methodology.

OK I will go to the office, hope I will find it...

It is a 1920 x 1440 image that falls on the Viper sensor... which is then cropped to scope anamorphic 1920 x 1080 on the sensor itself by binning the pixels differently (three pixels binned into one output)...

I never said that it has higher resolution than RED, I just said that it has slightly less than RED's and much more than all the rest...

So if I can find it (it was shot a year ago), you will see what I'm saying...

In an hour I will be back...
 
Thanks Evangelos. From what I read from the Thomson papers on Viper, binning the sensor in 3 gives the whole sensor an effective pixel resolution of 1920x1440, of which only the middle 1080 rows are read out in 2.37:1 mode. Is that also your understanding of the process?

Graeme
 
Graeme,
Correct me if I'm wrong, but is this not a bit of a moot point? After all, no matter what, it records at 1080p 4:4:4. The sensor, I think we all agree, can resolve that, but it can't ever be greater than that recording limit. From the glass to the sensor to the recording medium. Am I missing something here?

The only real question I see here is: what's better, putting Hawk 35mm anamorphic glass like the V-Lites, made for 16:9 sensors, in front of it, or having it use its CinemaScope mode for recording? I would say its CinemaScope mode would be the better choice.
 
Thanks Evangelos. From what I read from the Thomson papers on Viper, binning the sensor in 3 gives the whole sensor an effective pixel resolution of 1920x1440, of which only the middle 1080 rows are read out in 2.37:1 mode. Is that also your understanding of the process?

Graeme

I'm back...

Yes this is what I say...

The RED vs. Viper res comparison... with REDalert 20.1.8...

Vertical_center_res.jpg


Following are the full frames...

Viper.jpg


RES-chart-RED.jpg


This is the R3D that produced the res chart above

http://www.motionfx.gr/Files/A001_C001_080314_001.R3D.ZIP

Remove the .ZIP extension after the download... it seems there is a problem with the .R3D extension...

It's easily observed that RED resolves higher resolution than the Viper.

The difference is not as huge as it is between RED and the F35 or F23...

The Viper surpasses 1300 lines easily, bearing in mind that all the above was shot with a 35mm adapter, so the light passes through a Zeiss 50mm ZF at f/2,
then through a spinning ground glass, a condenser lens, a flipping prism, a relay lens, the CCD prism block and finally onto the 3 CCDs...

My Canon Cinestyle zoom can't go higher than 1200 lines, so I don't know how clean the image could be if I put a DigiPrime on...

The fact is that an F900R can't go higher than 900 lines, so an F23 would be similar; the F35 can't go higher than 1000 lines, the same with the SI-2K,
and you get a similar number with the 3700 etc.

The D21 with anamorphic lenses reaches the same levels...

So yes, the Viper is one of the best existing cameras in terms of resolution...

I don't want to go into the discussion of what the same comparison would look like with a red-light or blue-light
transmissive resolution chart... because what we compare is just the luminance channel that results from all R, G and B pixels...

Let this legacy camera (Viper) do a few more features before it vanishes forever... it is a real breakthrough in digital cinema history.

So no games.
 
Link to R3D is broken.

Graeme
 
Someone should check my math here but:

If you have a 16:9 chart, which measures resolution in TV lines per picture height, and you have a 16:9 camera and frame the chart, then, if it says 1000 lines, that means the horizontal resolution is 1000*16/9 ≈ 1778, or 1.7k in film-speak.

Now we do the funky 2:1 aspect ratio pixels. The full sensor has 1920x1440 of them, but we use the middle 1080 because that's all we can record to existing tape formats, and it conveniently gives us a 2.37:1 aspect ratio when un-squeezed by 1.333333.

And as luck would have it, the vertical resolution of ~1400, when divided by the 1.333, gives us 1050 lines (just under the max 1080), and because there's reasonable MTF there, that accounts for the mild aliasing we see. 1050 vertical res, *16/9 to get a horizontal ~1867, which is about right for a 1080p system with a max horizontal rez of 1920.

Does that work for an explanation for the numbers?
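The arithmetic above can be checked in a few lines (just the ratios, nothing camera-specific):

```python
# TV lines per picture height -> horizontal resolution for a 16:9 frame.
def tvl_to_horizontal(tvl, aspect_w=16, aspect_h=9):
    return tvl * aspect_w / aspect_h

print(round(tvl_to_horizontal(1000)))  # 1778, "1.7k in film-speak"

# Viper scope mode: ~1400 measured vertical lines on the squeezed image,
# un-squeezed by the 1.333x (4/3) pixel aspect ratio.
vertical = 1400 / (4 / 3)
print(round(vertical))                 # 1050, just under the 1080-row max

# Equivalent horizontal figure for a 16:9-referenced measurement:
print(round(tvl_to_horizontal(vertical)))  # 1867, near the 1920 limit
```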

Graeme
 
lines visible vs. lines scanned

Sorry to get into the issue, but it is possible to resolve more lines in what you see than the pixel count if the subject is moving over the sensor.

The aliasing will average out, some details will reinforce, and the contrast will be lower than the peak at about 0.7x the line count, but you should be able to get visible resolution on moving subjects at uneven ratios of the line count from frame to frame.

Movie images seem to have twice the detail when moving as in a still frame because your mind blends detail from one frame to another to "see" details smaller than the grain size.

Likewise, slowly moving the sensor would let it pick up more detail than having it stationary, as you watch the images and integrate the sub-pixel details.

You need true RAW sensor data to see those sub-pixel effects, because the wavelets used in some compression methods will disturb the sub-pixel values.

Movie film cameras weave the image around; that lack of steadiness helps detail pass through digital scanning and projection at a sub-pixel level. If you shoot with a stationary digital camera, you don't have the subject in constant motion over the grid of pixels, falling partway off to the right on one frame and partway off to the left on the next, and so on. That constant motion is also used by our eyes to hide the grid of cells we see with: the brain smooths out the dots and connects the details that pop on and off as our eyes shake in micro-movements.

Old analog video looked more "live" (on non-stick camera tubes, not the later ones) because the raster on the tube and monitor moved over the image details, and you could get a little past the line count in the vertical if the electron beam was focused well. It was a high-order interlace (related to the AC ripple in the camera's power supply).
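The sub-pixel-motion point above can be shown with a toy 1D simulation (a sketch; the grid sizes and frequency are made up for illustration, not taken from any camera):

```python
import numpy as np

# Detail finer than the pixel pitch aliases in any single frame, but
# frames captured at sub-pixel shifts (a moving subject) jointly
# retain the true fine detail.

fine = 8                      # sub-positions per sensor pixel
pixels = 16
n = pixels * fine
cycles = 12                   # 12 cycles over 16 pixels = 0.75 cycles/pixel,
x = np.arange(n)              # above the single-frame Nyquist limit of 0.5
signal = np.sin(2 * np.pi * cycles * x / n)

def capture(shift):
    """One 'frame': the shifted scene box-averaged over each pixel."""
    return np.roll(signal, shift).reshape(pixels, fine).mean(axis=1)

# A single stationary frame aliases: its strongest frequency is wrong.
one_frame = capture(0)
alias = np.argmax(np.abs(np.fft.rfft(one_frame))[1:]) + 1
print(alias)                  # 4 cycles across the frame, not the true 12

# Interleave frames taken at every sub-pixel shift (motion over time):
recon = np.empty(n)
for k in range(fine):
    recon[(np.arange(pixels) * fine - k) % n] = capture(k)

recovered = np.argmax(np.abs(np.fft.rfft(recon))[1:]) + 1
print(recovered)              # 12: the fine detail survives, attenuated
```

The pixel averaging only attenuates the fine pattern; it never removes it, so the shifted frames together still carry it, just at lower contrast, as the post describes.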
 
Graeme, what you say is right...

The bottom line is: when we need the big cinema format, which is CinemaScope 2.37:1, for whatever reason, with cameras that can shoot only spherical and have a 16:9 sensor, like RED, F35, F23 etc., we do the crop.

Crop = loss of resolution

The solution is either an anamorphic adapter like the Canon AVC235, or the Hawk 1.33x anamorphics...

Or a trick like the one the Viper does, which allows us to use our regular spherical lenses and make all the pixels count...

That alone puts the Viper on top of all the 1080p cameras...

RED users should learn that the Hawk anamorphics are the only way to make all the pixels count when shooting in scope format...

If RED users don't use Hawks, a Viper with DigiPrimes could make images almost as high-resolution as RED can... plus a mechanical shutter and good tungsten performance, and unfortunately minus 35mm DoF...
 
Would there be any benefit to shooting 1920 x 1080 with a 1.33x anamorphic lens on the Viper instead of using CinemaScope mode? If so, where could I rent anamorphic lenses?
 
Evangelos - thanks. Glad we got to the bottom of that and learned more how the Viper works in the process!

Cropping does indeed imply a loss of resolution. And the Viper has a clever solution to this issue - it does crop at the sensor level, but still manages to put a full 1920x1080 of meaningful pixels to tape. That's a darn good idea. However, what we did was to go out to 4.5k, giving at least a 3.5k measured horizontal rez, and around 1.5k measured vertical rez, which, although cropped from what the sensor sees, is still more than enough resolution to do the widescreen justice.

Graeme
 
Would there be any benefit to shooting 1920 x 1080 with a 1.33x anamorphic lens on the Viper instead of using CinemaScope mode? If so, where could I rent anamorphic lenses?

There are no anamorphic lenses for the 2/3-inch B4 mount.

There is just the Canon AVC-235, which has been out of production; it mounts in front of the camera, adds 15 cm and inverts the image, and the lens goes in front of it... so it's Lens >> AVC-235 >> Camera

Or you use Viper's scope mode...

If you use an adapter like the SGblade with my GG.relay in S35 to get the missing 35mm DoF, you can still benefit from the Viper's scope mode, but you will lose 1 stop, so the Viper becomes 160 ASA... which is very close to what RED and D21 are by general consensus...

Evangelos - thanks. Glad we got to the bottom of that and learned more how the Viper works in the process!

Cropping does indeed imply a loss of resolution. And the Viper has a clever solution to this issue - it does crop at the sensor level, but still manages to put a full 1920x1080 of meaningful pixels to tape. That's a darn good idea. However, what we did was to go out to 4.5k, giving at least a 3.5k measured horizontal rez, and around 1.5k measured vertical rez, which, although cropped from what the sensor sees, is still more than enough resolution to do the widescreen justice.

Graeme

A good idea would be to do some kind of binning in upcoming cameras to mimic what the Viper does...

Yep, it's good enough on both cameras...

For all non-RED users, I hope it's clear why the Viper, after seven years, is still the best 2/3-inch camera ever built for digital cinema...

And why a film shot for CinemaScope on an F35 without Hawks will always be inferior to what a Viper can do...

So for me, to summarize it... it's RED or D21 or Viper... in any random order...
 
Well, I just emailed Band Pro Digital about altering or constructing new DigiPrimes that are anamorphic... still waiting to hear back. And I was thinking more of 35mm anamorphic lenses with an adapter.

New DigiPrimes??? ok...

Putting an anamorphic 35mm lens on an adapter needs a 4:3 sensor... the Viper is 16:9... so that doesn't work either...
 
The binning on the Viper is only useful due to the highly non-square pixels it uses. Binning gives you no help in a square-pixel camera. Also, the other key thing about the Viper approach is that half of its benefit is that the camera is tied to 1080p output. It's a way of squeezing more of the sensor information into that fixed bucket which is the 1080p output. With what we're doing, we just keep making the bucket bigger.

Graeme
 
Unless I'm misunderstanding something, those 1:1 crops make it no contest: the RED blows the Viper away for image quality.

The image in the 1:1 crop was made with a very odd lens system...

The light in this crop passes through:

1. Front lens Zeiss ZF 50mm
2. SGblade 35 DoF adapter spinning ground glass
3. Condenser lens to minimize vignette
4. Image invert prism block
5. Relay lens that mounts to the camera
6. CCD prism block
7. Finally, onto the 3 CCDs

Did you expect something cleaner? RED has just a lens...

Viper_blade_5.JPG



There are also other qualities, apart from luminance resolution, that make each of the three unique...

For instance, on the Viper there are 1920 x 4320 = 8,294,400 individual sub-pixels capturing the red channel, and another 8,294,400 capturing the blue channel... on RED we just have 2240 x 1260 = 2,822,400 for the red channel and a similar number for the blue...

Do you think that this huge difference (8,294,400 pixels vs. RED's 2,822,400) doesn't give the slightest advantage to the Viper for... let's say... skin tones in low light?
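For reference, the per-channel photosite counts quoted above as a back-of-envelope check (the RED figure assumes a Bayer mosaic with red sites on half the rows and half the columns):

```python
viper_per_channel = 1920 * 4320  # one full Viper CCD of sub-pixels per channel
red_per_channel = 2240 * 1260    # Bayer red sites: half width x half height

print(viper_per_channel)                              # 8294400
print(red_per_channel)                                # 2822400
print(round(viper_per_channel / red_per_channel, 1))  # ~2.9x more samples
```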

On RED, it's the REDcode RAW workflow that makes it unique, along with the S35 sensor...

D21 has an S35 4:3 sensor and a mechanical shutter...

The Viper has a 27-megapixel sensor with a mechanical shutter, and it also has a scope mode with spherical lenses... but it doesn't have an S35 sensor...
 
Ah, but the whole point of this thread is not what pixels you have, but how you use them....

The Viper uses up to 6 sub-pixels to make a single pixel. It's a clever system that allows them to address different pixel aspect ratios for 720p, 1080p and 2.37:1 aspect ratio images from the same sensor. Very clever, but.... The sub pixels are also very small.... And what is key is pixel design. Many factors.

Luma resolution is not the be all and end all of imagery, but it's a primary component of what we see, so it's important.

Graeme
 